I'm working on a Chrome extension (no extension knowledge required for this question...) and when I visit a page on a certain domain, a script runs. All it does is grab the attribute value from a <meta> tag in the page:
$('meta[itemprop=contentURL]').attr('content')
This works fine on the first page load. However, the page also contains links to related content. If I click one of the related links, the Chrome spinner spins a bit, the new content loads, and the URL in the address bar updates.
However, if I run the above jQuery again, I get the old attribute value, not the new one from the new page. Using Chrome's Inspect Element, I see the old attribute value, but the new one is there if I use View Page Source instead.
So it seems that the DOM is stale. This question goes along with all of the other "DOM vs. page source are different" threads I've looked at, but I didn't get any answers from them.
Is there a good way to get a new DOM with the updated attribute? Thanks.
Edit: Here's what the chrome extension code looks like:
chrome.webNavigation.onHistoryStateUpdated.addListener(function(details) {
    // request the current page with `cache: false`
    $.ajax({ url: window.location.href, cache: false })
        .done(function(data) {
            var content = $(data).filter("meta[itemprop=contentURL]").attr("content");
            console.log(content);
        });
});
The above code logs undefined. I'm still looking for a good workaround, unless the proposed solution below is the best one.
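One plausible reason for the undefined, with a hedged workaround sketch: when jQuery parses a full HTML document string, <head> elements such as <meta> can get dropped or flattened, whereas DOMParser builds a detached document in which they survive. Assuming the fetched text is a full HTML document:

var xhr = new XMLHttpRequest();
xhr.onload = function() {
    // parse the fetched page into a detached document, head included
    var doc = new DOMParser().parseFromString(this.responseText, "text/html");
    var meta = doc.querySelector("meta[itemprop=contentURL]");
    console.log(meta ? meta.content : "meta tag not found");
};
xhr.open("GET", window.location.href + "?=" + new Date().getTime()); // cache-buster
xhr.send();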
Edit, updated
Try
v2 (javascript)
function done() {
    var div = document.createElement("div");
    var content = this.responseText;
    div.innerHTML = content;   // parse the fetched page into a detached div
    content = div.querySelectorAll("meta[itemprop*=contentURL]")[0].content;
    console.log(content);
}

var request = new XMLHttpRequest();
request.onload = done;
// the timestamp query busts the cache; `false` makes the request synchronous
request.open("GET", window.location.href + "?=" + (new Date().getTime()), false);
request.send();
v1 (javascript utilizing jquery)
// request the current page with `cache: false`
$.ajax({ url: window.location.href, cache: false })
    .done(function(data) {
        var content = $(data).filter("meta[itemprop=contentURL]").attr("content");
        console.log(content);
    });
See the jQuery.ajax() settings documentation, in particular the `cache` option.
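For reference, `cache: false` works by appending a timestamp parameter so that each request URL is unique and bypasses the cache; it is roughly equivalent to:

$.ajax({ url: window.location.href + "?_=" + new Date().getTime() });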
I'm simply using an example from a book I'm reading. The example is labeled, "Loading HTML with Ajax." This is the JS portion of the code:
var xhr = new XMLHttpRequest();
xhr.onload = function() {
if(xhr.status === 200) {
document.getElementById('content').innerHTML = xhr.responseText;
}
};
xhr.open('GET', 'data/data.html', true);
xhr.send(null);
I'm getting the CSS portion of the code (headers, etc.) when I load the page in the browser, but none of the JS (there should be maps that load onto the page). The example says I should comment out this portion of the code above:
xhr.onload = function() {
    if (xhr.status === 200) {
        document.getElementById('content').innerHTML = xhr.responseText;

...if I'm running the code locally without a server, but that's not working either.
Is using XMLHttpRequest() an outdated way to make an Ajax call?
Yes, but it still works and that's not the problem. The more modern way is fetch.
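A minimal fetch-based equivalent of the book's XHR example might look like this (a sketch, assuming the same data/data.html path):

fetch('data/data.html')
    .then(function(response) {
        if (!response.ok) throw new Error('HTTP ' + response.status);
        return response.text();
    })
    .then(function(html) {
        document.getElementById('content').innerHTML = html;
    });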
I'm getting the CSS portion of the code (headers, etc.) when I load the page onto the browser but none of the JS (there should be maps which would load onto the page).
That's because assigning HTML that contains script tags to innerHTML doesn't run the script defined by those tags. The script tags are effectively ignored.
To run those scripts, you'll need to find them in the result and then recreate them, something along these lines:
var content = document.getElementById('content');
content.innerHTML = xhr.responseText;
// find each script element in the inserted HTML and recreate it so it runs
content.querySelectorAll("script").forEach(function(script) {
    var newScript = document.createElement("script");
    newScript.type = script.type;
    if (script.src) {
        newScript.src = script.src;                 // external script: copy the URL
    } else {
        newScript.textContent = script.textContent; // inline script: copy the code
    }
    document.body.appendChild(newScript);
});
Note that this is not quite the same as loading the page with the script elements in it directly. When a page is loaded directly, the code in script tags without async, defer, or type="module" is executed immediately when the closing script tag is encountered (so that the loaded script can use document.write to output to the HTML stream; this is very mid-1990s). In the code above, they run afterward instead.
Note that on older browsers, the NodeList returned by querySelectorAll may not have forEach; it was added just a few years ago. See my answer here if you need to polyfill it.
Because I didn't completely understand T.J.'s answer (no offense, T.J.), I wanted to provide a simple answer for anyone who might be reading this. I only recently found this answer on mozilla.org: How do you set up a local testing server? (https://developer.mozilla.org/en-US/docs/Learn/Common_questions/set_up_a_local_testing_server). I won't go into details; I'll just leave the answer up to Mozilla. (Scroll down the page to the section titled "Running a simple local HTTP server.")
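The Mozilla page walks through Python's built-in server; for a JavaScript flavor, a minimal static file server in Node.js might look like the sketch below (assumptions: Node is installed; save as server.js, run `node server.js` in the page's folder, then browse to http://localhost:8000/):

var http = require("http");
var fs = require("fs");
var path = require("path");

http.createServer(function(req, res) {
    // serve index.html for "/", otherwise the requested file
    var file = path.join(__dirname, req.url === "/" ? "index.html" : req.url);
    fs.readFile(file, function(err, data) {
        if (err) {
            res.writeHead(404);
            res.end("Not found");
            return;
        }
        res.writeHead(200);
        res.end(data);
    });
}).listen(8000);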
After the GET request arrives, the values are there but the button disappears. The button itself works on localhost.
I'm trying to add a Facebook share button to a page with dynamic content (user ID, etc.) coming from an ESP8266 server. The values are filled in via JavaScript, setting innerHTML from an AJAX GET request for a JSON file when the page body loads. When I test on my localhost everything is OK, since the JSON file loads very fast, before the button does, so everything works. But when I use the ESP8266 server, the response to the GET request arrives somewhat after the button has loaded, so when it is received and the fields get populated with the values, the button disappears and only a word with a link remains.
Basically the button works on my localhost, so the innerHTML and everything else is OK. It seems I need a way to reload the CSS or something from JavaScript to bring the button box back to life. Is there a way to reload the button?
The .json file is just this (getajx.json):
{"temp1":"1", "energia":"2", "energiatotal":"3", "tem":"2", "cliente":"22", "usuario":"22"}
You can test on your localhost by placing this getajx.json file with that content in the same directory as the HTML page, and it will work. But I need to know how to make it work when the GET request takes too long. Please, any help?
I tried to add a flag that is set after a positive response and use it to trigger the reloadCss function, but that didn't work:
<script>
var temp1, energia, energiatotal, tem, cliente, usuario;
var ok = 0;

function GetAjx() {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onreadystatechange = function() {
        if (this.readyState == 4 && this.status == 200) {
            ok = 1;
            var myObj = JSON.parse(this.responseText);
            document.getElementById("temp1").innerHTML = myObj.temp1;
            document.getElementById("energiatotal").innerHTML = myObj.energiatotal;
            document.getElementById("tem").innerHTML = myObj.tem;
        }
    };
    if (ok = 1) {
        function reloadCss() {
            var links = document.getElementsByTagName("link");
            for (var cl in links) {
                var link = links[cl];
                if (link.rel === "stylesheet")
                    link.href += "";
            }
        }
    }
    xmlhttp.open("GET", "getajx.json", true);
    xmlhttp.send();
}
</script>
I found a solution. Apparently I had to change the order of the scripts, making the jQuery/AJAX source load first, and I also put the Facebook script at the bottom of the page. I took away the reload scripts and the other Facebook scripts I was testing. Most importantly, I made the GET request synchronous instead of asynchronous, which forces the page to wait for the GET request to finish. Strangely, I had tried that before and it didn't work; perhaps it was because of the order of the scripts. Can anyone comment on that?
xmlhttp.open("GET", "getajx.json" , false);
The Performance tab under F12 in Google Chrome was helpful. I decided not to mess with the priority of the scripts, even though I tried. Any comments would be appreciated. Again, thanks for the help.
I have a blog set up with a homepage that loads blog entries into the page when a title is clicked (using a JavaScript function). I would like to be able to send someone a link to a specific entry. In essence, is there a way to have a URL that passes the article name into the function? I know this probably doesn't exist, so is there a way to achieve it? If not, how should I set up my website so that I don't have 50 replications of the 'shell' at each URL, with the only thing changing being the text inside one element?
Here is a truncated version of the code.
function addArticle(articleName) {
    var ourRequest = new XMLHttpRequest();
    currentArticleId = articleName;
    ourRequest.open('GET', 'https://x.neocities.org/' + articleName + '.html');
    ourRequest.onload = function() {
        var blogPost = ourRequest.responseText;
        document.getElementById("on_display").innerHTML = blogPost;
    };
    ourRequest.send();
}
Note: I have each article stored at its own URL, which this script fetches.
Thanks for the help!
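One common way to get shareable links without duplicating the shell is to put the article name in the URL hash. A minimal sketch, assuming the addArticle function above: a link like https://x.neocities.org/#some-article would then open that entry on load.

window.addEventListener("load", function() {
    var articleName = window.location.hash.slice(1); // strip the leading '#'
    if (articleName) {
        addArticle(articleName);
    }
});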
I have a functional WordPress theme that loads content via AJAX. One issue I'm having, though, is that when pages are loaded directly, the AJAX script no longer works. The link structure works as follows: while on www.example.com, if the about page link is clicked, the URL becomes www.example.com/#/about. But when I directly load the standalone page www.example.com/about, the links clicked from this page turn into www.example.com/about/#/otherlinks. I modified the code a little from this tutorial: http://www.deluxeblogtips.com/2010/05/how-to-ajaxify-wordpress-theme.html. Here is my code. Thanks for the help.
jQuery(document).ready(function($) {
    var $mainContent = $("#container"),
        siteUrl = "http://" + top.location.host.toString(),
        url = '';

    // note: the original snippet had a stray ")" at the end of this selector string
    $(document).delegate("a[href^='" + siteUrl + "']:not([href*=/wp-admin/]):not([href*=/wp-login.php]):not([href$=/feed/])", "click", function() {
        location.hash = this.pathname;
        return false;
    });

    $(window).bind('hashchange', function() {
        url = window.location.hash.substring(1);
        if (!url) {
            return;
        }
        url = url + " #ajaxContent";
        $mainContent.fadeOut(function() {
            $mainContent.load(url, function() {
                $mainContent.fadeIn();
            });
        });
    });

    $(window).trigger('hashchange');
});
The problem you're describing is not easily solved. There are multiple factors at stake, but it boils down to this:
Any changes to a URL will trigger a page reload
Only exception is if only the hash part of the URL changes
As you can tell, there is no hash part in the URL www.example.com/about/. Consequently, your script cannot change that URL without triggering a page reload, except by adding a hash. Knowing that, your script only changes the URL by adding a new hash part or modifying the existing one, while leaving the "pathname" part alone. And so you get URLs like www.example.com/about/#/otherlinks.
Now, from my point of view there are two ways to solve your problem.
First, there is an API (the HTML5 History API, i.e. history.pushState) that can modify the whole URL pathname without a reload, but it's not available everywhere. Using this solution and falling back to a classic page reload for older browsers is the cleaner method; a minimal sketch follows.
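A sketch of that first option, with assumptions: siteUrl is the variable from the code above, and loadContent is a hypothetical helper that AJAX-loads the given path into #container.

if (window.history && history.pushState) {
    $(document).on("click", "a[href^='" + siteUrl + "']", function() {
        history.pushState({ path: this.pathname }, "", this.pathname);
        loadContent(this.pathname); // hypothetical helper: AJAX-load this path
        return false;
    });
    $(window).on("popstate", function() {
        // back/forward buttons: restore content for the current path
        loadContent(location.pathname);
    });
}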
Otherwise, you can force the page to reload just once to reset the URL to www.example.com/ and start off from a good basis. Here is the code to do so:
$(document).delegate("a[href^='" + siteUrl + "']:not([href*=/wp-admin/]):not([href*=/wp-login.php]):not([href$=/feed/])", "click", function() {
    // navigate to the site root with the clicked path in the hash;
    // this triggers one full reload that resets the URL
    location.assign(siteUrl + '/#' + this.pathname);
    return false;
});
It should be noted that this script won't work if your site is not at the root of the domain. For it to work for www.example.com/mysite/, you will need to adjust the path handling.
Please let me know how it went.
Let's say I have a web page (/index.html) that contains the following
<li>
    <div>item1</div>
    <a href="/details/item1.html">details</a>
</li>
and I would like some JavaScript on /index.html to load that /details/item1.html page and extract some information from it.
The page /details/item1.html might contain things like
<div id="some_id">
    picture
    map
</div>
My task is to write a Greasemonkey script, so changing anything server-side is not an option.
To summarize: JavaScript is running on /index.html, and I would like that code to add some information to /index.html, extracted from both /index.html and /details/item1.html. My question is how to fetch information from /details/item1.html. I have currently written code to extract the link (e.g. /details/item1.html) and pass it on to a method that should extract the wanted information (at first, just the .innerHTML of the some_id div is OK; I can process it further later).
The following is my current attempt, but it does not work. Any suggestions?
function get_information(link)
{
    var obj = document.createElement('object');
    obj.data = link;
    document.getElementsByTagName('body')[0].appendChild(obj);
    var some_id = document.getElementById('some_id');
    if (!some_id) {
        alert("some_id == NULL");
        return "";
    }
    return some_id.innerHTML;
}
First:
function get_information(link, callback) {
var xhr = new XMLHttpRequest();
xhr.open("GET", link, true);
xhr.onreadystatechange = function() {
if (xhr.readyState === 4) {
callback(xhr.responseText);
}
};
xhr.send(null);
}
then
get_information("/details/item1.html", function(text) {
var div = document.createElement("div");
div.innerHTML = text;
// Do something with the div here, like inserting it into the page
});
I have not tested any of this - off the top of my head. YMMV
Only one page exists in the client (browser) at a time; all other (virtual/possible) pages are on the server. So how will you get information from another page using JavaScript, given that you will have to interact with the server at some point to retrieve that second page?
If you can, integrate an AJAX request to load the second page (and parse it). If that's not an option, I'd say you'll have to load all the pages you want to extract information from at the same time, hide the bits you don't want to show (in hidden divs?), and then have your index (or whatever controls the view) retrieve the needed information from there... even though that sounds pretty creepy ;)
You can load the page in a hidden iframe and use normal DOM manipulation to extract the results, or get the text of the page via AJAX, grab the part between <body...> and </body>, and temporarily inject it into a div. (The second might fail for some exotic elements like ins.) I would expect Greasemonkey to have more powerful functions than normal JavaScript for stuff like that, though; it might be worth thumbing through the documentation.
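On that last point: GM_xmlhttpRequest is the privileged XHR that Greasemonkey exposes to userscripts (it also allows cross-origin requests, which a page-level XMLHttpRequest may not). A hedged sketch of the same extraction using it; note that older Greasemonkey versions may require an absolute URL:

GM_xmlhttpRequest({
    method: "GET",
    url: "/details/item1.html",
    onload: function(response) {
        // inject the fetched markup into a detached div and query it
        var div = document.createElement("div");
        div.innerHTML = response.responseText;
        var some_id = div.querySelector("#some_id");
        console.log(some_id ? some_id.innerHTML : "some_id not found");
    }
});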