I'm working on a small text game in JS, and the easiest way I found to store text is to use text files. I can order them in different folders, they're really light, and they're easily identifiable and editable in case I need to make changes.
I'm loading them using AJAX, as suggested to me in another thread:
$.ajax({
    url: 'text/example.txt',
    success: function (text) {
        document.getElementById("main").innerHTML = document.getElementById("main").innerHTML + text;
    }
});
And honestly, so far it's been working pretty well in single-file scenarios. When only one TXT file needs to be displayed, there are literally no problems. But unfortunately, in cases where a lot of different files need to be displayed in a correct order (let's say around 10 different files), the text gets messed up and loads out of order. I suppose this is because some files just can't be fetched fast enough, so the responses arrive in whatever order they complete.
So at this point I'm really not too sure what to do.
Is there a way to get my script to wait before printing the next piece of text until the previous one has loaded?
Maybe a way to load all the txt files when the site is accessed?
My knowledge is pretty limited so I'm really not sure how this could be fixed.
I tried searching Google and Stack Overflow; none of the threads I found are helping, perhaps because I'm really not an expert.
You can achieve this with callbacks; the following will run the AJAX requests one by one, each starting only after the previous one finishes:
//setup an array of AJAX urls
var ajaxes = [{ url: 'text/example.txt' }, { url: 'text/example1.txt' }],
    current = 0;

//declare your function to run AJAX requests
function do_ajax() {
    //check to make sure there are more requests to make
    if (current < ajaxes.length) {
        //make the AJAX request with the given data from the `ajaxes` array of objects
        $.ajax({
            url: ajaxes[current].url,
            success: function (text) {
                document.getElementById("main").innerHTML = document.getElementById("main").innerHTML + text;
                //increment the `current` counter and recursively call this function again
                current++;
                do_ajax();
            }
        });
    }
}
//run the AJAX function for the first time when you want
do_ajax();
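If you would rather not load the files one at a time, another option is to fire all the requests in parallel and only append the responses once every one of them has arrived. Here is a minimal sketch of that idea, assuming jQuery 1.5+ (for deferreds); the file names are placeholders:

var urls = ['text/example.txt', 'text/example1.txt'];
//fire all requests at once; $.get returns a deferred for each
var requests = urls.map(function (url) {
    return $.get(url);
});

//when every request has finished, append the responses in array order
$.when.apply($, requests).done(function () {
    //with more than one request, each argument is a [data, statusText, jqXHR] array
    Array.prototype.slice.call(arguments).forEach(function (result) {
        document.getElementById("main").innerHTML += result[0];
    });
});

This keeps the downloads concurrent but still displays the text in the order of the urls array.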
I've seen some answers to this that refer the asker to other libraries (like PhantomJS), but I'm here wondering if it is at all possible to do this in just Node.js.
Consider my code below. It requests a webpage using request, then uses cheerio to explore the DOM and scrape the page for data. It works flawlessly, and if everything had gone as planned, I believe it would have output a file as I imagined it in my head.
The problem is that the page I am requesting builds the table I'm looking at asynchronously, using either AJAX or JSONP; I'm not entirely sure how .jsp pages work.
So here I am, trying to find a way to "wait" for this data to load before I scrape it for my new file.
var cheerio = require('cheerio'),
    request = require('request'),
    fs = require('fs');

// Go to the page in question
request({
    method: 'GET',
    url: 'http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp'
}, function (err, response, body) {
    if (err) return console.error(err);

    // Tell Cheerio to load the HTML
    var $ = cheerio.load(body);

    // Create an empty object to write to the file later
    var toSort = {};

    // Iterate over the DOM and fill the toSort object
    $('#emb table td.list_right').each(function () {
        var row = $(this).parent();
        toSort[$(this).text()] = {
            [$("#lastdate").text()]: $(row).find(".idx1").html(),
            [$("#currdate").text()]: $(row).find(".idx2").html()
        };
    });

    // Write/overwrite a new file
    var stream = fs.createWriteStream("/tmp/shipping.txt");
    var toWrite = "";

    stream.once('open', function (fd) {
        toWrite += "{\r\n";
        for (var i in toSort) {
            toWrite += "\t" + i + ": { \r\n";
            for (var j in toSort[i]) {
                toWrite += "\t\t" + j + ":" + toSort[i][j] + ",\r\n";
            }
            toWrite += "\t" + "}, \r\n";
        }
        toWrite += "}";
        stream.write(toWrite);
        stream.end();
    });
});
The expected result is a text file with information formatted like a JSON object.
It should contain several instances that each look something like this:
"QINHUANGDAO - GUANGZHOU (50,000-60,000DWT)": {
"2016-09-29": 26.7,
"2016-09-30": 26.8,
},
But since the name is the only thing that doesn't load asynchronously (the dates and values do), I get a messed-up object.
I actually tried just setting a setTimeout in various places in the code. The script will only be touched by developers who can afford to run it several times if it fails now and then, so while not ideal, even a setTimeout (up to maybe 5 seconds) would be good enough.
It turns out the setTimeouts don't work. I suspect that once I request the page, I'm stuck with a snapshot of it as it was when I received it, and I'm in fact not looking at a live thing whose dynamic content I can wait for.
I've considered investigating how to intercept the packets as they come, but I don't understand HTTP well enough to know where to start.
The setTimeout will not make any difference, even if you increase it to an hour. The problem here is that you are making a request against this URL:
http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp
and their server returns the HTML, and in this HTML there are the JS and CSS imports. That is where your case ends: you just have the HTML, and that's it. A browser, on the other hand, knows how to parse the HTML document, so it is able to understand the JavaScript and execute it, and this is exactly your problem: your program has no way to do anything further with the HTML contents. You need to find or write a scraper that is able to run JavaScript. I just found this similar issue on Stack Overflow:
Web-scraping JavaScript page with Python
The answerer there suggests https://github.com/niklasb/dryscrape, and it seems that this tool is able to run JavaScript. It is written in Python, though.
You are trying to scrape the original page that doesn't include the data you need.
When the page is loaded, the browser evaluates the JS code it includes, and this code knows where and how to get the data.
The first option is to evaluate the same code, like PhantomJS does.
The other (and you seem to be interested in it) is to investigate the page's network activity and to understand what additional requests you should perform to get the data you need.
In your case, these are:
http://index.chineseshipping.com.cn/servlet/cbfiDailyGetContrast?SpecifiedDate=&jc=jsonp1475577615267&_=1475577619626
and
http://index.chineseshipping.com.cn/servlet/allGetCurrentComposites?date=Tue%20Oct%2004%202016%2013:40:20%20GMT+0300%20(MSK)&jc=jsonp1475577615268&_=1475577620325
In both requests:
_ is a cache-busting parameter to prevent caching.
jc is the name of a JS wrapper function which should be invoked with the result (https://en.wikipedia.org/wiki/JSONP).
So, by scraping the table template at http://www1.chineseshipping.com.cn/en/indices/cbcfinew.jsp and performing the two additional requests, you will be able to combine them into the same data structure you see in the browser.
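To illustrate, here is a minimal sketch of fetching one of those endpoints from Node and unwrapping the JSONP envelope. The query-string values here are placeholders, and the exact shape of the response is an assumption:

var request = require('request');

// Hypothetical query values; jc names the JSONP wrapper function
var url = 'http://index.chineseshipping.com.cn/servlet/cbfiDailyGetContrast' +
          '?SpecifiedDate=&jc=jsonp1&_=' + Date.now();

request(url, function (err, response, body) {
    if (err) return console.error(err);
    // Strip the "jsonp1( ... )" wrapper to get at the JSON payload
    var match = body.match(/^[^(]*\(([\s\S]*)\)\s*;?\s*$/);
    if (!match) return console.error('Unexpected JSONP response');
    var data = JSON.parse(match[1]);
    console.log(data);
});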
Is it possible, using JavaScript/HTML5, to check the HTTP reply code for a particular web page? For example, if the user enters a sentence, I wish to check whether I have an audio file for each word in the sentence. I realize I should use a database to look up the availability of a word, but I'm currently working on a very simple demo, which consists of a single HTML file and a bunch of ogg files.
Current versions of XHR have actually allowed form submissions for some time now. I've been using that for a while to do JavaScript RPC to MySQL, and it works like a champ. And of course, as always, XHR exposes that HTTP status code you're looking for.
Just scope out the current docs for XMLHttpRequest over at the Mozilla site and you'll have that kicking like you want in no time.
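For instance, a minimal sketch (the .ogg path is hypothetical):

var xhr = new XMLHttpRequest();
xhr.open("HEAD", "audio/bells.ogg", true); // HEAD: we only need the status, not the body
xhr.onload = function () {
    // xhr.status is the HTTP reply code: 200 if the file exists, 404 if not
    console.log(xhr.status);
};
xhr.send();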
You could try using AJAX to achieve that. Here is a simple example using the jQuery ajax function:
// Assuming this is the word you are looking for
var str = "bells";

// Note: use a path relative to your page (or a full URL including http://);
// "www.example.com/..." without a scheme would be treated as a relative path
$.ajax( "audio/" + str + ".ogg" ).done(function () {
    // An audio file for that word exists
}).fail(function () {
    // There is no audio file for that word
});
For a whole sentence you could just split it into words and look each one up individually.
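A minimal sketch of that sentence case (again with a hypothetical audio/ path), using lightweight HEAD requests so the files aren't actually downloaded:

var sentence = "ring the bells";
sentence.split(/\s+/).forEach(function (word) {
    $.ajax({ url: "audio/" + word + ".ogg", type: "HEAD" })
        .done(function () { console.log(word + ": available"); })
        .fail(function () { console.log(word + ": missing"); });
});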
XMLHttpRequest is definitely the way to go, assuming you're not going to use a database (which I would highly recommend because they're totally awesome.)
function oggSearch () {
    // this.status holds the HTTP reply code (e.g. 200 if found, 404 if not)
    alert(this.responseText);
}

var sInput = ''; //grab it from whatever element you're using

var oggReq = new XMLHttpRequest();
oggReq.onload = oggSearch;
//I'm not sure if concatenating the file type will cause issues
oggReq.open("GET", sInput + ".ogg", true);
oggReq.send();
I obviously haven't tested the above, but it's based off the link below.
https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest/Using_XMLHttpRequest
Sorry if this is a simple question (or has been answered before), but could anyone tell me how to download a page with JavaScript? I want to use JavaScript to download the page "example.php" on my server every five seconds and compare it with what it was before. If the page changes, I would like the JavaScript code to refresh the page hosting it. Does that make sense? Something like:
string downloaded = DownloadPage("example.php");
timer x = new timer(5);
when the timer goes off:
if(DownloadPage("example.php") != downloaded){
    RefreshPage();
}
Thanks, and sorry this was probably such an easy question :)
Using the jQuery framework's get function:
$.get('example', function(data) {
    if (data == "something") {
        //perform action
    }
});
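To cover the full poll-and-compare idea from the question, here is a minimal sketch (assuming jQuery, and that example.php is the page being watched):

var baseline = null;
setInterval(function () {
    $.get('example.php', function (data) {
        if (baseline === null) {
            baseline = data;   // first fetch: remember the content
        } else if (data !== baseline) {
            location.reload(); // content changed: refresh the host page
        }
    });
}, 5000);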
I've spent days on this and hit it from every angle I can think of. I'm working on a simple Windows 7 gadget. This script pulls JSON data from a remote web server and puts it on the page. I'm using jQuery 1.6.2 for the $.getJSON. The script consumes more memory on each loop.
var count = 1;

$(document).ready(function () {
    updateView();
});

function updateView() {
    $("#junk").html(count);
    count++;
    $.getJSON( URL + "&callback=?", populateView);
    setTimeout( updateView, 1000 );
}

function populateView(status) {
    $("#debug").html(status.queue.mbleft + " MB Remaining<br>" + status.queue.mb + " MB Total");
}
Any help would be greatly appreciated....Thank you!
EDIT: Added a JSON data sample:
?({"queue":{"active_lang":"en","paused":true,"session":"39ad74939e89e6408f98998adfbae1e2","restart_req":false,"power_options":true,"slots":[{"status":"Queued","index":0,"eta":"unknown","missing":0,"avg_age":"2d","script":"None","msgid":"","verbosity":"","mb":"8949.88","sizeleft":"976 MB","filename":"TestFile#1","priority":"Normal","cat":"*","mbleft":"975.75","timeleft":"0:00:00","percentage":"89","nzo_id":"-n3c6z","unpackopts":"3","size":"8.7 GB"}],"speed":"0 ","helpuri":"","size":"8.7 GB","uptime":"2d","refresh_rate":"","limit":0,"isverbose":false,"start":0,"version":"0.6.5","new_rel_url":"","diskspacetotal2":"931.51","color_scheme":"gold","diskspacetotal1":"931.51","nt":true,"status":"Paused","last_warning":"","have_warnings":"0","cache_art":"0","sizeleft":"976 MB","finishaction":null,"paused_all":false,"cache_size":"0 B","finish":0,"new_release":"","pause_int":"0","mbleft":"975.75","diskspace1":"668.52","scripts":[],"categories":["*"],"darwin":false,"timeleft":"0:00:00","mb":"8949.88","noofslots":1,"nbDetails":false,"eta":"unknown","quota":"","loadavg":"","cache_max":"0","kbpersec":"0.00","speedlimit":"","webdir":"","queue_details":"0","diskspace2":"668.52"}})
EDIT 2: Stripped code down to this and it still leaks. I think that eliminates traversing the DOM as a contributor.
$(document).ready(function () {
    setInterval(updateView, 1000);
});

function updateView() {
    $.getJSON( URL + "&callback=?", populateView);
}

function populateView(status) {
}
EDIT 3: It's not jQuery. I removed jQuery and did it with straight js. Still leaks.
function init() {
    setInterval(updateView, 1000);
}

function updateView() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", URL, false);
    xhr.setRequestHeader("If-Modified-Since", "0");
    xhr.send('');
}
So...if it's not jQuery, not just in IE (Chrome too). What the heck?! Ideas?
Thank you!
Edit 2:
If it's actually Task Manager showing the leak here, then I think the next step is to investigate IE, as I believe IE is the engine used to host Windows gadgets.
If you can recreate your script in a little HTML file, you can run this tool and have a look at whether it's IE that's doing it:
http://blogs.msdn.com/b/gpde/archive/2009/08/03/javascript-memory-leak-detector-v2.aspx
Also, are you running IE8 or 9 ?
Edit:
Based on the JSON string in the OP, the problem is basically misleading here.
The bit of JavaScript posted is working perfectly fine.
The server producing the JSON is the one showing a difference in memory usage; I would investigate the website/endpoint that's creating that JSON and see what the issue is.
Just had a thought:
$.getJSON is just a shorthand for jQuery's $.ajax call.
I wonder if it makes a difference if you change your code to use $.ajax but specifically add the cache setting to it:
$.ajax({
    url: URL + "&callback=?",
    dataType: 'json',
    cache: false,
    success: populateView
});
That might stop it from trying to store things in memory, perhaps; and depending on your browser, it might just be showing more memory because garbage collection hasn't run yet, so to speak.
I have the feeling that the setTimeout call within the updateView function is causing this behaviour. To test this, you can modify your code to:
$(document).ready(function () {
    setInterval(updateView, 1000);
});

function updateView() {
    $("#junk").html(count);
    count++;
    $.getJSON( URL + "&callback=?", populateView);
}

function populateView(status) {
    $("#debug").html(status.queue.mbleft + " MB Remaining<br>" + status.queue.mb + " MB Total");
}
EDIT: The setInterval function will execute the passed-in function over and over, every x milliseconds. See the docs.
EDIT 2:
Another performance loss (although it might not be critical to the issue) is that you are traversing the DOM every second to find the $('#debug') element. You could store it once and pass it in:

$(document).ready(function () {
    var debug = $('#debug');
    var junk = $('#junk');
    setInterval(function () { updateView(debug, junk); }, 1000);
});

function updateView(debug, junk) {
    junk.html(count);
    count++;
    $.getJSON( URL + "&callback=?", function (status) { populateView(status, debug); });
}

function populateView(status, debug) {
    debug.html(status.queue.mbleft + " MB Remaining<br>" + status.queue.mb + " MB Total");
}
Edit 3: I have changed the code above because I forgot to take the response from the server into account. Assuming that queue is a property of the returned JSON, the code should be as above.
Edit 4: This is a very interesting issue. Another approach, then. Let's assume there are still some client-side scripts clogging the memory. What could they be? As far as I understand, the only two things left are the setInterval and the $.getJSON function. The $.getJSON function is a simple AJAX request wrapper which fires a request and waits for the response from the server. The setInterval function is a bit more peculiar, because it sets up timers, fires functions, etc.
I think if you manage to mimic this on your server, or even just refresh this webpage in your browser every second or every 5 seconds, you will be able to see whether it is the client or the server that is leaking while processing your request.
Found this thread while trying to track down the underlying cause of this problem, because I also had a similar issue recently, although my memory would increase by about 1 MB per minute... I pretty much isolated it to the JSON parsing. Run the ajax command with dataType: 'text' and you should see that the memory gets cleaned up.
I found a library, json_parse.js, which recursively parses the JSON data using the JS engine (not eval). I manually parse the text data to JSON in the success callback, and this works well.
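As an illustration of that workaround, a minimal sketch (this assumes a same-origin endpoint returning plain JSON rather than the JSONP-wrapped variant):

$.ajax({
    url: URL,
    dataType: 'text', // hand back the raw string; skip jQuery's automatic JSON handling
    success: function (text) {
        var status = JSON.parse(text); // or json_parse(text) with the library above
        // ...use status.queue.mbleft etc. as before...
    }
});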
Let's say I have a web page (/index.html) that contains the following
<li>
<div>item1</div>
details
</li>
and I would like some JavaScript on /index.html to load the /details/item1.html page and extract some information from it.
The page /details/item1.html might contain things like
<div id="some_id">
picture
map
</div>
My task is to write a greasemonkey script, so changing anything serverside is not an option.
To summarize: the JavaScript runs on /index.html, and I would like it to add some information to /index.html that is extracted from both /index.html and /details/item1.html.
My question is how to fetch information from /details/item1.html.
I currently have code written to extract the link (e.g. /details/item1.html) and pass it on to a method that should extract the wanted information (at first, just the .innerHTML of the some_id div is OK; I can process it further later).
The following is my current attempt, but it does not work. Any suggestions?
function get_information(link)
{
    var obj = document.createElement('object');
    obj.data = link;
    document.getElementsByTagName('body')[0].appendChild(obj);

    var some_id = document.getElementById('some_id');
    if (!some_id) {
        alert("some_id == NULL");
        return "";
    }
    return some_id.innerHTML;
}
First:
function get_information(link, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", link, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            callback(xhr.responseText);
        }
    };
    xhr.send(null);
}
then
get_information("/details/item1.html", function (text) {
    var div = document.createElement("div");
    div.innerHTML = text;
    // Do something with the div here, like inserting it into the page
});
I have not tested any of this - off the top of my head. YMMV
Since only one page exists in the client (browser) at a time, and all other (virtual/possible) pages live on the server, you will have to interact with the server at some point to retrieve the second page; there is no way around that in JavaScript.
If you can, integrate an AJAX request to load the second page (and parse it). If that's not an option, I'd say you'll have to load all the pages you want to extract information from at the same time, hide the bits you don't want to show (in hidden DIVs?), and then have your index (or whatever controls the view) retrieve the needed information from there... even though that sounds pretty creepy ;)
You can load the page in a hidden iframe and use normal DOM manipulation to extract the results, or get the text of the page via AJAX, grab the part between <body...> and </body>, and temporarily inject it into a div. (The second might fail for some exotic elements like ins.) I would expect Greasemonkey to have more powerful functions than normal JavaScript for stuff like this, though; it might be worth thumbing through the documentation.
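For the hidden-iframe approach, a minimal sketch (this assumes the detail page is same-origin, as it is here):

// Load /details/item1.html in a hidden iframe and read #some_id from it
var iframe = document.createElement('iframe');
iframe.style.display = 'none';
iframe.src = '/details/item1.html';
iframe.onload = function () {
    var doc = iframe.contentDocument || iframe.contentWindow.document;
    var someId = doc.getElementById('some_id');
    if (someId) {
        console.log(someId.innerHTML); // use the extracted markup here
    }
    document.body.removeChild(iframe); // clean up
};
document.body.appendChild(iframe);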