I have a short HTML file that tries to add a one-line footer that will be common to several files by using w3IncludeHTML. Some of the time, it works, but some of the time the last bit of HTML before the call to w3IncludeHTML is replaced by the footer, with the end result being that the last bit is missing and the footer appears twice.
Here's my calling page:
<head>
<script src="http://www.w3schools.com/lib/w3data.js"></script>
</head>
<body>
« Previous • Home
<div w3-include-html="Footer.html"></div>
<script>
w3IncludeHTML();
</script>
</body>
Here's my footer page:
Footer.
When it works, my output looks like this:
« Previous • Home
Footer.
When it doesn't work, it looks like this:
« Previous •
Footer.
Footer.
I have found no way to force either behavior; whether it works appears to be random, with the error coming and going as I refresh the page over and over.
I'm using Chrome 53.0.2785.143 m and running Apache HTTP Server 2.4.7 as my localhost under Windows 10 Version 1607 Build 14393.321. (Problem may well be a Chrome issue, as IE11, FireFox 49, and Edge don't show the same behavior.)
Can anyone help me understand why this is happening? Thanks!
UPDATE:
I pulled out the JavaScript function's source and played with it a bit. The problem goes away if, at the point where the function opens an XMLHttpRequest, it asks for the subsequent send operation to be synchronous. This doesn't appear to be a real fix, though, as synchronous use of send is deprecated on the main thread. I'm no JavaScript guru, so I don't understand why making this operation asynchronous would cause the problem I am seeing. Is it a bug in Chrome, maybe?
Here's the JavaScript (please note I have changed the name of the function and the name of the attribute it looks for, and reformatted it a bit; other than that, it is the same as w3schools' version, except for the change from true to false in the call to the open method):
function xLuIncludeFile()
{
var z, i, a, file, xhttp;
z = document.getElementsByTagName("*");
for (i = 0; i < z.length; i++)
{
if (z[i].getAttribute("xlu-include-file"))
{
a = z[i].cloneNode(false);
file = z[i].getAttribute("xlu-include-file");
xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function()
{
if (xhttp.readyState == 4 && xhttp.status == 200)
{
a.removeAttribute("xlu-include-file");
a.innerHTML = xhttp.responseText;
z[i].parentNode.replaceChild(a, z[i]);
xLuIncludeFile();
}
}
// false makes the send operation synchronous, which solves a problem
// when using this function in short pages with Chrome. But it is
// deprecated on the main thread due to its impact on responsiveness.
// This call may end up throwing an exception someday.
xhttp.open("GET", file, false);
xhttp.send();
return;
}
}
}
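For reference, here is a sketch of a variant that stays asynchronous (avoiding the deprecated synchronous send) by processing every tagged element in a single pass, with each request's state kept in its own scope. This is not the w3schools code, just an illustration that reuses the same xlu-include-file attribute:
function xLuIncludeFileAsync()
{
    // Find every element carrying the include attribute in one go.
    var elements = document.querySelectorAll("[xlu-include-file]");
    Array.prototype.forEach.call(elements, function (el)
    {
        var file = el.getAttribute("xlu-include-file");
        var xhttp = new XMLHttpRequest();
        xhttp.onreadystatechange = function ()
        {
            if (xhttp.readyState == 4 && xhttp.status == 200)
            {
                el.removeAttribute("xlu-include-file");
                el.innerHTML = xhttp.responseText;
            }
        };
        xhttp.open("GET", file, true); // true = asynchronous
        xhttp.send();
    });
}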
The problem is caused by installing and enabling the "Chrome Stylist" extension, version 2.1.0. Disabling that extension makes the problem go away.
Related
After the GET request arrives, the values are there but the button disappears.
(This is the button working on localhost.)
I'm trying to add a Facebook share button to a page with dynamic contents (like a user id) coming from an ESP8266 server. The values are filled in via JavaScript (innerHTML) by an AJAX GET request for a JSON file, made once when the page body loads. When I test on my localhost everything is OK, since the JSON file loads very fast, before the button does, and everything works. But when I use the ESP8266 server, the response to the GET request arrives somewhat after the button has loaded, and when it is received and the fields are populated with the values, the button disappears and only a word with a link remains.
Basically the button works on my localhost, so the innerHTML and everything is OK. It seems I need to find a way to reload the CSS or something via JavaScript to get the button box alive again.
Is there a way to reload the button?
The .json file (getajx.json) is just this:
{"temp1":"1", "energia":"2", "energiatotal":"3", "tem":"2", "cliente":"22", "usuario":"22"}
You can test on your localhost by placing this getajx.json file, with that content, in the same directory as the HTML page, and it will work. But I need to know how to make it work when the GET request takes too long. Please, any help?
I tried to add a flag that is set after a positive response and use it to activate the reloadCss function, but that didn't work:
<script>
var temp1, energia, energiatotal, tem, cliente, usuario ;
var ok=0;
function GetAjx() {
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
ok = 1;
var myObj = JSON.parse(this.responseText);
document.getElementById("temp1").innerHTML = myObj.temp1;
document.getElementById("energiatotal").innerHTML = myObj.energiatotal;
document.getElementById("tem").innerHTML = myObj.tem;
}};
// Note: this flag check doesn't work as intended: "ok = 1" is an assignment
// (so the condition is always true), the block runs before any response has
// arrived, and reloadCss is only defined here, never actually called.
if (ok = 1) {
    function reloadCss() {
        var links = document.getElementsByTagName("link");
        for (var cl in links) {
            var link = links[cl];
            if (link.rel === "stylesheet")
                link.href += "";
        }
    }
};
xmlhttp.open("GET", "getajx.json" , true);
xmlhttp.send();
}
</script>
I found a solution...
Apparently I had to change the order of the scripts so that the jQuery/AJAX source loads first, and I also put the script from Facebook at the bottom of the page.
I took away the reload scripts and the other Facebook scripts I was testing.
Most importantly, I made the GET request synchronous instead of asynchronous, which forces the page to wait for the GET request to finish. Strangely, I had tried that before and it didn't work, perhaps because of the order of the scripts. Can anyone comment on that?
xmlhttp.open("GET", "getajx.json" , false);
The Performance tab of Chrome's F12 developer tools was helpful.
I decided not to mess with the priority of the scripts any further, even though I tried.
Any comments would be appreciated.
Again, thanks for the help.
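To the "can anyone comment on that?" part: making the request synchronous works, but it blocks the page while the ESP8266 responds, and synchronous XHR is deprecated on the main thread. An alternative is to keep the request asynchronous and do the follow-up work inside the callback, once the response actually exists. A rough sketch (reloadCss is the helper from the question; whether the Facebook button also needs re-rendering with FB.XFBML.parse() depends on how it is embedded, so treat that line as a guess):
function GetAjx() {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            var myObj = JSON.parse(this.responseText);
            document.getElementById("temp1").innerHTML = myObj.temp1;
            document.getElementById("energiatotal").innerHTML = myObj.energiatotal;
            document.getElementById("tem").innerHTML = myObj.tem;
            // Anything that depends on the new content goes here, after it exists.
            reloadCss();
            // If the share button is an XFBML widget, it may need to be re-parsed:
            if (window.FB && FB.XFBML) { FB.XFBML.parse(); }
        }
    };
    xmlhttp.open("GET", "getajx.json", true); // stays asynchronous
    xmlhttp.send();
}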
tl;dr: I want a bookmarklet that opens, in a new tab, a random link (matching specified multiple HTML classes) from a specified domain, with code that works with current logins. Thank you.
short version of butchered code:
javascript:
(
var % 20 site = domain.com
function() {
window.location.host == site
void(window.open(document.links[Math.floor(document.querySelectorAll("a.class1, a.class2"))].href, '_blank'))
}();
//beautified with: http://jsbeautifier.org/
To whom it may concern:
I have searched around for a while and even considered switching services, but although some come close or are similar to my particular request, none address everything the request entails.
Execute the script on a specific domain even when no page from said domain is currently open. If login authentication is required for obtaining the information or data needed for execution, read from or work in conjunction with the existing session.
Fetch from a specific domain host a random link out of all links on that domain with a certain HTML class (or otherwise), preferably using CSS selectors.
Open the result in a new tab.
From butchering such similarities, the result became something like this:
//bookmarklet
javascript:
//anonymous function+ wrapped code before execution
(
// function global variables for quick substitution
var %20 site = domain.com
function(){
//set domain for script execution
window.location.host == site
//open new tab for
void(window.open(document.links
//random link
[Math.floor
//with specific classes (elements found with css selectors)
(document.querySelectorAll("a.class1, a.class2"))
]//end random-query
.href,'_blank' //end page-open
)//end link-open
)//end "void"
}//end function definition
//execute
();
//(tried) checked with:
//http://www.javascriptlint.com/online_lint.php
Lastly, I have attained at most basic CSS knowledge. I apologise if this request has anybody headdesking, facepalming or otherwise in gtfo mode. It is only too sad there is apparently no tag for "Warning: I DIY-ed this stuff" on Stack Exchange. However, I would still like answers that go into a bit of depth explaining the why and what of each correction and modification.
Thank you presently, for your time and effort.
Theoretically, the following code should do what you want:
window.addEventListener('load', function ( ) {
var query = 'a.class1[href], a.class2[href]';
var candidates = document.querySelectorAll(query);
var choice = Math.floor(Math.random() * candidates.length);
window.open(candidates.item(choice).href, 'randomtab');
}, true);
window.location.href = 'http://domain.com';
But it doesn't, because the possibility to retain event listeners across a page unload could be abused and browsers protect you against such abuse.
Instead, you can manually load the domain of your choice and then click a simpler bookmarklet with the following code:
var query = 'a.class1[href], a.class2[href]';
var candidates = document.querySelectorAll(query);
var choice = Math.floor(Math.random() * candidates.length);
window.open(candidates.item(choice).href, 'randomtab');
You could wrap the above in javascript:(function ( ) { ... })(); and minify as before, but it already works if you just minify it and only slap a javascript: in front.
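For example, after minification the simpler bookmarklet might end up looking roughly like this (one line; a.class1 and a.class2 are placeholders for your real selectors):
javascript:(function(){var c=document.querySelectorAll('a.class1[href], a.class2[href]');window.open(c.item(Math.floor(Math.random()*c.length)).href,'randomtab');})();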
I understand your situation of being an absolute beginner posting "DIY" code, but I'm still not going to explain step by step why this code works and yours doesn't. The first version of the code above is complex to explain to a beginner, and the list of issues with the code in the question is too long to discuss in full. You'll be better off studying more JavaScript; a good resource with tutorials is MDN.
I'm trying to get to grips with the Firefox Add-on SDK (previously known as Jetpack, from what I understand), but I'm having problems working with the DOM.
I need to iterate over all of the text nodes in the DOM when a web page loads and make changes to some of the strings they contain. I've posted a simplified version of what I'm doing below (I'm new to JavaScript, so forgive any oddities).
// test.js
function parseElement(Element)
{
if (Element == null)
return;
var i = 0;
var Result = false;
if (Element.hasChildNodes())
{
var children = Element.childNodes;
while (i <= children.length - 1)
{
var child = children.item(i);
parseElement(child);
i++;
}
}
if (Element.nodeType == 3)
{
// For testing - see what the text node contains
alert(Element.nodeValue);
Result = true;
}
return Result;
}
window.addEventListener("load", function load(event)
{
window.removeEventListener("load", load, false);
parseElement(document.body);
}
When I create a basic HTML document:
<!-- test.html -->
<html>
<head>
<script type="text/javascript" src="test.js"></script>
</head>
<body>
<b>hello world</b>
<p>foo</p>
<i>test</i>
</body>
</html>
...include this JavaScript file in the HEAD section, then open it in Firefox, the "alert" displays 6 dialog boxes containing:
1) "hello world"
2) blank -> no visible characters, just a newline
3) "foo"
4) blank -> no visible characters, just a newline
5) "test"
6) blank -> no visible characters, just a newline
Exactly what I would expect to see.
The problem arises when I create an add-on and use test.js as a page-mod content script from my main.js file (modified to remove the "addEventListener" part). When I use "cfx run" to start Firefox with my add-on installed and then open the same HTML document (with the "script" tag for the test.js file commented out), the alerts do not display at all.
So that's the first puzzle. But having also navigated to other web pages - for example, a YouTube video page - the alert DOES display several dialogs, but they include very strange strings, mostly the content of script tags:
EDIT: I don't have enough reputation to embed an image, so here's a link showing the sort of thing I mean: http://img46.imageshack.us/img46/5994/mtpd.jpg
And again, the text I would expect to see is absent.
Apologies for some of the redundancy below, but just to be clear: this is my main.js:
main.js
var data = require("sdk/self").data;
var pageMod = require("sdk/page-mod");
exports.main = function()
{
pageMod.PageMod({
include: "*",
contentScriptFile: [data.url("test.js")]
});
}
And the modified version of the Javascript file is identical to the "test.js" listing above, but for the end part:
test.js
<snip>
...
return Result;
}
parseElement(document.body);
I've included my project files (if I can call them that) in a zip if it makes things easier to visualise: http://www.mediafire.com/?774iprbngtlgkcp
I've tried changing
parseElement(document.body);
to
parseElement(unsafeWindow.document.body);
in case it makes any difference, but the outcome is identical.
So I'm very puzzled about what's happening. I can't understand why the test.js file isn't picking out the text nodes (and only the text nodes) from the DOM when I use it as part of an add-on, but does exactly what I would anticipate when included as a script in an HTML document. Can anyone shed any light on this?
Thank you in advance.
Errors in your lib code and contentScripts are usually logged to the Error Console. Check what is printed there. Also see the SDK console module.
Your page-mod won't run because by default page-mods will run only after the load event.
See the contentScriptWhen documentation.
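For illustration, the relevant PageMod option might look something like this (a sketch; the rest of main.js stays as in the question):
pageMod.PageMod({
    include: "*",
    contentScriptWhen: "ready", // run at DOMContentLoaded instead of waiting for the load event
    contentScriptFile: [data.url("test.js")]
});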
Script tags actually often have a text-node child containing the inline script source, so it is absolutely normal that those are enumerated as well.
For some discussion about walking tree nodes, see: getElementsByTagName() equivalent for textNodes
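As an illustration of the tree-walking approach, the standard TreeWalker API collects text nodes without any manual recursion (a sketch):
// Collect and log every non-empty text node under document.body.
var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT, null, false);
var node;
while ((node = walker.nextNode()))
{
    if (node.nodeValue.trim().length > 0)
        console.log(node.nodeValue);
}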
However, if you're after the text of specific ids/classes, consider using document.querySelector/.querySelectorAll, or if you're after nodes that have a specific XPath, use document.evaluate. This very likely will be a lot faster.
Other than that, I cannot really tell what exactly your remaining issues are or what you're trying to achieve in the first place, so I cannot advise on that.
You mentioned that
I've discovered that my add-on is NOT executed when a document is
accessed via File->Open File.
That is by design. The match-pattern documentation says that
A single asterisk matches any URL with an http, https, or ftp scheme.
For other schemes like file, resource, or data, use a scheme followed
by an asterisk, as below.
You can use the regular expression /.*/ to match all sites and all schemas.
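So, to also run on local files opened via File->Open File, the include option could be written roughly like this (a sketch):
pageMod.PageMod({
    include: ["*", "file://*"], // "*" covers http/https/ftp; "file://*" covers local files
    contentScriptFile: [data.url("test.js")]
});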
I use this script (from Dynamic Drive) to dynamically fill a div with a given id:
var bustcachevar=1 //bust potential caching of external pages after initial request? (1=yes, 0=no)
var loadedobjects=""
var rootdomain="http://"+window.location.hostname
var bustcacheparameter=""
function ajaxpage(url, containerid){
var page_request = false
if (window.XMLHttpRequest) // if Mozilla, Safari etc
page_request = new XMLHttpRequest();
else if (window.ActiveXObject){ // if IE
try {
page_request = new ActiveXObject("Msxml2.XMLHTTP");
}
catch (e){
try{
page_request = new ActiveXObject("Microsoft.XMLHTTP");
}
catch (e){}
}
}
else
return false
page_request.onreadystatechange=function(){
loadpage(page_request, containerid)
}
if (bustcachevar) //if bust caching of external page
bustcacheparameter=(url.indexOf("?")!=-1)? "&"+new Date().getTime() : "?"+new Date().getTime()
page_request.open('GET', url+bustcacheparameter, true)
page_request.send(null)
}
function loadpage(page_request, containerid){
if (page_request.readyState == 4 && (page_request.status==200 || window.location.href.indexOf("http")==-1))
document.getElementById(containerid).innerHTML=page_request.responseText
}
Everything works fine until I load a page with, for example, a euro sign in it.
The code pages are set correctly on the page, but it displays a question mark.
I don't know enough JavaScript to fix this problem.
Thanks in advance for any advice!
NOTE: Thanks to a friend, I now know that saving the file you want to load with this script as UTF-8 fixes the problem. But I can't be sure that every page I load is UTF-8 encoded, so my question is:
is there a way for the script to set the right charset? Is there a way to let the script adapt to the code page of the file you want to load?
It seems like you have an encoding problem somewhere.
I highly suggest using UTF-8 everywhere, as it is the established standard for the web. Check that the page doing the AJAX call and the dynamically loaded page are both encoded in UTF-8 and sent by the server with correct headers (the headers should contain something like Content-Type: text/html; charset=UTF-8).
It is also a best practice to replace exotic characters with their HTML-friendly entities to avoid such issues: use &euro; for €.
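If you cannot control how the included pages are served, XMLHttpRequest also lets the requesting script force a charset on the response via overrideMimeType(). A sketch of how that would slot into the ajaxpage function above (it only helps when the file really is UTF-8, and it works on native XMLHttpRequest objects, not the old ActiveX fallbacks):
page_request.open('GET', url + bustcacheparameter, true)
// Force the response to be decoded as UTF-8 regardless of the server's
// Content-Type header; must be called before send().
if (page_request.overrideMimeType)
    page_request.overrideMimeType("text/html; charset=UTF-8")
page_request.send(null)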
This is my hypothesis (and I think it's been confirmed by your updates):
When you write the remote document you are loading, you just open your editor, press the € key on your keyboard and save. Since you never picked an encoding, your editor used the ANSI code page. And here's the issue: the ANSI code page basically depends on where you live. In Western Europe, Win-1252 is a popular choice, and it encodes the euro symbol as 0x80.
When you write the target HTML doc where you want to insert it, you do exactly the same and get a Win-1252 document. However, the web server doesn't know what the encoding is. Many times it will default to something like ISO-8859-1, and it happens that ISO-8859-1 does not even have a euro symbol!
JavaScript reads 0x80 and writes 0x80.
The browser finds 0x80 in an HTML document that's supposedly ISO-8859-1. In that encoding, 0x80 is a control character, so it renders as blank or a replacement character.
So you don't have to fix your JavaScript code (there's nothing fixable there, mainly because there's nothing wrong there). You need to find out what your site's encoding is and generate files that actually use such encoding (advanced editors will let you choose).
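One quick way to check what encoding the browser actually applied is to inspect the document's character set from the console, on both the including page and the loaded file opened directly (a sketch):
// Logs the encoding the browser used to decode the current document,
// e.g. "UTF-8", "windows-1252" or "ISO-8859-1".
console.log(document.characterSet);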
A slew of pages I've written for one of my web projects share some 144 identical lines of code, reproduced in each file. If I update one of those lines, I have to go back through ALL of the pages that share the code and update for each page. Is there a straightforward way to include HTML from a separate file?
And for bonus points, since so many pages use this code, it would be nice not to have to reload it for each page. Is there an easy way to store it in the browser's cache or only load the "content" section of the pages while leaving the rest of the page static?
Fountains of Thanks for any wisdom on this.
Mike
To include HTML from a separate file, use SSI (Server-Side Includes). This requires SSI support to be installed on the server, however.
You would write something like this in your files:
<!--#include file="included.html" -->
and that would include the file included.html when the page is accessed.
To load only the content of each page, use the XMLHTTPRequest object from JavaScript:
function LoadContent(url)
{
    var xmlhttp; // declared locally so it doesn't leak into the global scope
if (typeof(XMLHttpRequest) == "undefined")
{
try
{
xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
}
catch(e)
{
try
{
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
catch(e)
{
// fallback for browsers without XMLHttpRequest
window.location.href = "no-ajax.php?url="+escape(url);
return;
}
}
}
else
{
xmlhttp = new XMLHttpRequest();
}
xmlhttp.open("GET", url, false); // this request will be synchronous (will pause the script)
xmlhttp.send();
if(xmlhttp.status > 399) // 1xx, 2xx and 3xx status codes are not errors
{
// put error handling here
}
document.getElementById("content").innerHTML = xmlhttp.responseText;
}
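Note that the synchronous request above pauses the whole page while the content loads. An asynchronous variant of the same idea (a sketch, using the same hypothetical "content" element) avoids that:
function LoadContentAsync(url)
{
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function()
    {
        if (xmlhttp.readyState == 4)
        {
            if (xmlhttp.status > 399) // 1xx, 2xx and 3xx status codes are not errors
            {
                // put error handling here
                return;
            }
            document.getElementById("content").innerHTML = xmlhttp.responseText;
        }
    };
    xmlhttp.open("GET", url, true); // true = asynchronous
    xmlhttp.send();
}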
If we're assuming that you're talking about straight HTML pages, with no server-side code (ASP.NET, PHP, or server-side include ability), then in order to do both the including and the caching, you're going to need to use an iframe.
Each of your pages that duplicate the 144 lines of content should replace it with an iframe like so:
<iframe src="pagewithcontent.html"></iframe>
pagewithcontent.html would obviously be where you move the content to. The browser will cache the page appropriately, so that each parent page will simply get the shared content without making another request.
There's an article here that goes into great depth about HTML includes and some JavaScript methods of doing it. I would strongly recommend against the JavaScript methods.
My answer reflects the assumption that you can't do anything on the server side. However, by far the best solution is to do so if you can.