Refresh/Retrigger "Fetch" in HTML/JavaScript - javascript

First things first: I don't know anything about coding; I basically built my whole HTML file by CMD+C and CMD+V of what I found through lots of Google searches, changing just what was needed to fit what I intended. Interestingly, I got a result that is 95% of what I wanted.
Now I have just one last thing I can't get set (or find anything about on Google), so hopefully someone can answer it here. [Trying to include as much information as I can]
I made a simple one page HTML that shows the date/time and plays an audio livestream from my PC when opened.
I also want it to display the "Now Playing" information. After a lot of searches, I finally found a solution that even I could make with Dreamweaver.
I used the "fetch script" (or is it called the Fetch API?) to get a txt file that my music player outputs with the current song information. That fetch script gets the data and puts it into an element on the page.
The problem is that it only seems to do this once, at page load, and not every few seconds. The contents of the txt file change whenever a new song plays, and I want the displayed data in the HTML to stay current as well.
So how do I set that fetch script to re-fetch the txt contents every ~10 seconds?
Here is my current fetch script:
<script>
var url = 'NowPlaying.txt';
var storedText;

fetch(url)
  .then(function(response) {
    response.text().then(function(text) {
      storedText = text;
      currentSong();
    });
  });

function currentSong() {
  document.getElementById('thesong').textContent = storedText;
}
</script>
For making my HTML I use "Dreamweaver 2019" on "Mac OS 11 Big Sur".
It's a single HTML file, and all the files/assets the HTML accesses (the audio, background images, and the TXT file) are in the same directory/network.
I hope that provides all necessary details.
Oh, and what I already tried is copying the line "var t = setTimeout(fetch, 100);" into the script, because this seems to be what keeps the clock JavaScript current, and I hoped it would do the same with fetch.
Also attached is a screenshot of the HTML "live" in Chrome.
As you can see, the bottom is supposed to display the "Now Playing" information (please ignore that in this example the text is cut off on the right; the current information is too long, so it gets truncated at the end).

You can simply use setInterval to call your fetch every 10 seconds.
Just wrap your fetch call in a function and pass that function to setInterval.
Also, if at some point you would like to stop the fetch requests on an event like a button click, you can use clearInterval to stop them without refreshing the page (see the sketch after the snippet).
Run the snippet below to see the function being called every 10 seconds.
var url = 'NowPlaying.txt';
var storedText;

// Fetch every 10 seconds
function fetch10Seconds() {
  // Fetch
  fetch(url)
    .then(function(response) {
      response.text().then(function(text) {
        storedText = text;
        currentSong();
      });
    });
  console.log('Fetch again in 10 seconds');
}

// Call the function every 10 seconds
setInterval(fetch10Seconds, 10000); // 10 seconds, defined in milliseconds

// Called on each fetch (every 10 seconds)
function currentSong() {
  document.getElementById('thesong').textContent = storedText;
}
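A minimal sketch of the clearInterval idea mentioned above. The "stopButton" id is hypothetical (not something from the original page); keep the interval's id when you create it so you can cancel it later:
var intervalId = setInterval(fetch10Seconds, 10000);
// Hypothetical stop button: the id "stopButton" is an assumption
document.getElementById('stopButton').addEventListener('click', function() {
  clearInterval(intervalId); // no more fetches, no page refresh needed
});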

You can create a loop using the setInterval function:
var url = 'NowPlaying.txt';
var storedText;

setInterval(() => {
  fetch(url)
    .then(function(response) {
      response.text().then(function(text) {
        storedText = text;
        currentSong();
      });
    });
}, 10000); // in milliseconds

function currentSong() {
  document.getElementById('thesong').textContent = storedText;
}

Try this:
function doFetch() {
  setTimeout(() => {
    fetch(url)
      .then(response => {
        response.text().then(text => {
          storedText = text;
          currentSong();
          doFetch();
        });
      });
  }, 10000);
}
doFetch();
This waits for the data to be fetched before waiting another 10 seconds and fetching again. setInterval, by contrast, fires every 10 seconds on the dot regardless of whether the last run of the function succeeded or is even finished.
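If you prefer async/await, the same wait-then-refetch pattern can be sketched like this (the pollSong name and the error handling are my additions; url, storedText and currentSong come from the question):
async function pollSong() {
  try {
    const response = await fetch(url);
    storedText = await response.text();
    currentSong();
  } catch (err) {
    console.error('Fetch failed:', err); // log and keep polling
  }
  setTimeout(pollSong, 10000); // wait 10 seconds after the fetch settles
}
pollSong();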

Related

How to fetch, wait 5 seconds, and get the source code of a page

I have two websites; the first gets the source code of the second (the two websites aren't on the same host, hence CORS; the second website is not mine).
Example:
fetch("https://api.allorigins.win/get?url=" + url)
.then(response => {
if (response.ok) {
return response.json();
}
throw new Error('Network response was not ok.');
})
.then(data => {
var html = stringToHTML(data.contents);
});
It works, except that the second page renders some of its elements several seconds after loading, so they don't show up for me because I retrieve the page too early.
How can I make it wait a few seconds before retrieving, while still going through "api.allorigins.win"?
Do you have an idea? (I use vanilla JS.)
It's allorigins that would have to wait for the rendering, but it does not.
Your alternative is to implement your own version of allorigins using a headless browser that waits for the page to render before returning its HTML. There's no ready-made solution for that.
I don't know if you're using a framework or any library to handle the DOM, but with vanilla JS you can do something like this to check whether your DOM element is ready:
const DOMReadyCheck = setInterval(() => {
  if (document.getElementById('yourElement')) { // get your element here ('yourElement' is a placeholder)
    // send your fetch request and set your element with data
    clearInterval(DOMReadyCheck);
  }
}, 500); // poll every 500 ms

Why is the content not showing after refresh

I am creating a web app that allows users to search GIFs using the GIPHY API.
I added code that is supposed to refresh the page, then load all the GIFs.
// The div the GIFs from GIPHY will be appended to.
const imagesDiv = document.getElementById("imagesDiv");
// The user input -- the name of the GIFs searched, e.g. cats, dogs, etc.
const search = document.getElementById("search");
// The search button for GIFs.
const submit = document.getElementById("submit");

// When pressed, it begins searching.
submit.addEventListener("click", function () {
  // Refresh page first to get rid of old search results
  window.location.reload();
  getData(search.value);
});

// Code that uses the GIPHY API
function getData(query) {
  // Fetch data from GIPHY, using the user's input (e.g. dogs) in the URL
  fetch(
    "https://api.giphy.com/v1/gifs/search?q=" +
      query +
      "&api_key=8UHgk4rc0ictTp8kMXNGHbeJAWwg19yn&limit=5"
  )
    .then(function (response) {
      return response.json();
    })
    .then(function (myJson) {
      renderData(myJson);
    });

  function renderData(data) {
    console.log(data.data);
    // The loop runs as many times as needed to get all GIFs
    for (let i = 0; i < data.data.length; i++) {
      // Create an img element to represent the GIF
      const img = document.createElement("img");
      // Give it a className for CSS styling
      img.className = "gifs";
      // Give the img a url to get the GIF
      img.src = data.data[i].images.original.url;
      // Put it into the div
      imagesDiv.appendChild(img);
    }
  }
}
Instead, it loads and then refreshes the page, removing all the GIFs before they can pop up on screen.
Your code effectively ends at "window.location.reload();".
"getData(search.value);" never gets a chance to execute.
Try this:
submit.addEventListener("click", function () {
  imagesDiv.innerHTML = "";
  getData(search.value);
});
When you reload the page, all of its content, including scripts, gets reloaded.
So you asked the page to reload and then tried to load GIFs; that code does try to load the GIFs, but the reload has already started.
What you see ("it loads then refreshes the page") is GIFs served from cache being added to the page almost instantly, before the reload completes.
You might want to update your code so that you just remove the existing GIF elements from the DOM and add the new ones.
Instead of
window.location.reload();
you can write:
while (imagesDiv.firstChild) {
  imagesDiv.removeChild(imagesDiv.firstChild);
}
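Just a side note, not from the original answer: in modern browsers the same cleanup can be done in one call (the removeChild loop above works everywhere, though):
imagesDiv.replaceChildren(); // removes all child nodes of imagesDiv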

Changing a link dynamically in PhantomJS and clicking it to scrape the page

I've been trying to figure this out for a couple of days now but haven't been able to achieve it.
There's a web page where I need to scrape all the available records. I've noticed that if I modify the pagination link with Firebug or the browser's inspector, I can get all the records I need. For example, this is the original link:
<a href="javascript:gReport.navigate.paginate('paginator_min_row=16max_rows=15rows_fetched=15')">
If I modify that link like this:
<a href="javascript:gReport.navigate.paginate('paginator_min_row=1max_rows=5000rows_fetched=5000')">
and then click the pagination button in the browser (the very same one that contains the link I've just changed), I'm able to get all the records I need from that site (most of the time "rows" doesn't get any bigger than 4000; I use 5000 just in case).
Since I have to process that file by hand every single day, I thought that maybe I could automate the process with PhantomJS and get the whole page in a single run, without looking for that link and changing it manually. So, in order to modify the pagination link and get all the records, I'm using the following code:
var page = require('webpage').create();
var fs = require('fs');

page.open('http://testingsite1.local', function () {
  page.evaluate(function () {
    $('a[href="javascript:gReport.navigate.paginate(\'paginator_min_row=16max_rows=15rows_fetched=15\')"]').first().attr('href', 'javascript:gReport.navigate.paginate(\'paginator_min_row=1max_rows=5000rows_fetched=5000\')').attr('id', 'clickit');
    $('#clickit')[0].click();
  });
  page.render('test.png');
  fs.write('test.html', page.content, 'w');
  phantom.exit();
});
Notice that there are TWO pagination links on that website; because of that, I'm using jQuery's .first() to choose only the first one.
Also, since the required link doesn't have any identifier, I select it by its own href, change the href to what I need, and lastly add the "clickit" ID to it for later use.
Now, these are my questions:
I'm not exactly sure why it isn't working. If I run the code, it fetches the first page only. After examining the requested page's source code, I do see the href has been changed to what I want, but it just doesn't get called. I have two different theories on what might be wrong:
The modified href isn't getting "clicked", so the page isn't getting updated.
The href does get clicked, but since the page takes a few seconds to load all the results dynamically, I only get to dump the first page PhantomJS gets to see.
What do you guys think about it?
[UPDATE NOV 6 2015]
OK, so the answers provided by @Artjomb and @pguardiario pointed me in a new direction:
I needed more debugging info on what was going on
I needed to call the gReport.navigate.paginate function directly
Sadly, I simply lack the experience to properly use PhantomJS. Several other samples indicated that I could achieve what I wanted with CasperJS, so I tried it. This is what I produced after a couple of hours:
var utils = require('utils');
var fs = require('fs');
var url = 'http://testingsite1.local';

var casper = require('casper').create({
  verbose: true,
  logLevel: 'debug'
});

casper.on('error', function (msg, backtrace) {
  this.echo("=========================");
  this.echo("ERROR:");
  this.echo(msg);
  this.echo(backtrace);
  this.echo("=========================");
});

casper.on("page.error", function (msg, backtrace) {
  this.echo("=========================");
  this.echo("PAGE.ERROR:");
  this.echo(msg);
  this.echo(backtrace);
  this.echo("=========================");
});

casper.start(url, function () {
  var url = this.evaluate(function () {
    $('a[href="javascript:gReport.navigate.paginate(\'paginator_min_row=16max_rows=15rows_fetched=15\')"]').attr('href', 'javascript:gReport.navigate.paginate(\'paginator_min_row=1max_rows=5000rows_fetched=5000\')').attr('id', 'clicklink');
    return gReport.navigate.paginate('paginator_min_row=1max_rows=5000rows_fetched=5000');
  });
});

casper.then(function () {
  this.waitForSelector('.nonexistant', function () {
    // Nothing here
  }, function () {
    // Selector never appears, so this timeout branch fires after 50 seconds
    this.capture('screen.png');
    var html = this.getPageContent();
    var f = fs.open('test.html', 'w');
    f.write(html);
    f.close();
  }, 50000);
});

casper.run(function () {
  this.exit();
});
Please be gentle, as I know this code sucks; I'm no JavaScript expert and in fact I know very little of it. I know I should have waited for an element to appear, but that simply didn't work in my tests, as I was still getting the page without the update from the AJAX request.
In the end I waited a long time (50 seconds) for the AJAX request to show on the page and then dumped the HTML.
Oh, and calling the function directly did work great!
The href does get clicked, but since the page takes a few seconds to load all results dynamically I only get to dump the first page Phantomjs gets to see
It's easy to check whether that's the case by wrapping the render, write and exit calls in setTimeout and trying different timeouts:
page.open('http://testingsite1.local', function () {
  page.evaluate(function () {
    $('a[href="javascript:gReport.navigate.paginate(\'paginator_min_row=16max_rows=15rows_fetched=15\')"]').first().attr('href', 'javascript:gReport.navigate.paginate(\'paginator_min_row=1max_rows=5000rows_fetched=5000\')').attr('id', 'clickit');
    $('#clickit')[0].click();
  });
  setTimeout(function () {
    page.render('test.png');
    fs.write('test.html', page.content, 'w');
    phantom.exit();
  }, 5000);
});
If it's really just a timeout issue, then you should use the waitFor() function to wait for a specific condition like "all elements loaded" or "x elements of that type are loaded".
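Note that waitFor() is not built into PhantomJS; it's a helper from the official examples. A condensed sketch, reusing the page object from the snippet above; the 'tr' selector and the row count are placeholders to adjust to the real markup:
// Condensed waitFor() helper (adapted from PhantomJS's examples)
function waitFor(testFx, onReady, timeOutMillis) {
  var maxtimeOutMillis = timeOutMillis || 10000,
      start = new Date().getTime(),
      interval = setInterval(function () {
        if (new Date().getTime() - start < maxtimeOutMillis && !testFx()) {
          return; // condition not met yet, keep polling
        }
        clearInterval(interval);
        onReady(); // condition met (or we timed out)
      }, 250);
}

// Usage: wait until more than 15 rows are present, then dump the page
waitFor(function () {
  return page.evaluate(function () {
    return document.querySelectorAll('tr').length > 15;
  });
}, function () {
  page.render('test.png');
  fs.write('test.html', page.content, 'w');
  phantom.exit();
});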
The modified href isn't getting "clicked" so the page isn't getting updated
This is a little trickier. You can listen to the onConsoleMessage, onError, onResourceError and onResourceTimeout events (Example) and see whether there are errors on the page. Some of those errors are fixable by things you can do in PhantomJS: Function.prototype.bind not being available, or HTTPS sites/resources that cannot be loaded.
There are other ways to click something that are more reliable such as this one.
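One such alternative is PhantomJS's page.sendEvent(), which synthesizes a native mouse click at page coordinates instead of relying on the element's click() method. A sketch, assuming the link has already been given the "clickit" id as in the question:
// Compute the link's position inside the page...
var pos = page.evaluate(function () {
  var rect = document.getElementById('clickit').getBoundingClientRect();
  return { x: rect.left + rect.width / 2, y: rect.top + rect.height / 2 };
});
// ...and send a native click event at those coordinates
page.sendEvent('click', pos.x, pos.y);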

JW Player do action based on elapsed time

I have a video and a series of comments on the video. Each comment has a specific time associated with the video. For example, video 1 has 3 comments: one at second 15, another at second 30, and the last at second 45.
I can show all the comments. However, I instead want each comment to be shown at its associated time in the video. For example, comment 1 should only appear at second 15 and last until second 30, where it is replaced by the second comment, and so on.
I'm using JW Player to play the video and can use the getPosition() function to get the current elapsed time of the video. It should be simple JavaScript code to achieve this, but unfortunately I'm quite a beginner with JS.
My own idea is to use the onTime function and, for each position, check whether there is a comment on the server, retrieving it if there is. As in:
<script>
jwplayer("container1").onTime(function(event) {
  var comment = get_comment_of(event.position);
  setText(comment);

  function setText(text) {
    document.getElementById("message").innerHTML = text;
  }
});
</script>
However, this is an expensive approach and will send a huge number of requests to the server.
OK, the best solution I have: preload the comments into JavaScript arrays when the page loads, then show them when they occur using the onTime function. This way there is no need to send a new request on every onTime tick, as the comments are already available on the client side:
var timeArray = <?php echo json_encode($timeArray) ?>;
var commentArray = <?php echo json_encode($commentArray) ?>;

jwplayer("container1").onTime(function(event) {
  for (var i = 0; i < timeArray.length; i++) {
    var time = timeArray[i];
    var comment = commentArray[i];
    if (event.position >= time) {
      setText(comment);
    }
  }

  function setText(text) {
    document.getElementById("message").innerHTML = text;
  }
});
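A possible refinement (my assumption, not part of the original answer): onTime fires several times per second, so instead of scanning the whole array on every tick you can track the index of the next comment. This assumes timeArray is sorted ascending and the user doesn't seek backwards:
var nextComment = 0;
jwplayer("container1").onTime(function(event) {
  // Advance past every comment whose time has been reached
  while (nextComment < timeArray.length && event.position >= timeArray[nextComment]) {
    document.getElementById("message").innerHTML = commentArray[nextComment];
    nextComment++;
  }
});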

KDE plasmoid and autorefresh

I'm trying to write a KDE4 plasmoid in JavaScript, but without success.
I need to get some data via HTTP and display it in a Label. That part works well, but I also need it to refresh regularly (once every 10 seconds), and that's not working.
My code:
inLabel = new Label();
var timer = new QTimer();
var job = 0;
var fileContent = "";

function onData(job, data) {
  if (data.length > 0) {
    var content = new String(data.valueOf());
    fileContent += content;
  }
}

function onFinished(job) {
  inLabel.text = fileContent;
}

plasmoid.sizeChanged = function() {
  plasmoid.update();
};

timer.timeout.connect(getData);
timer.singleShot = false;
getData();
timer.start(10000);

function getData() {
  fileContent = "";
  job = plasmoid.getUrl("http://192.168.0.10/script.cgi");
  job.data.connect(onData);
  job.finished.connect(onFinished);
  plasmoid.update();
}
It fetches the URL once and does not refresh it after 10 seconds. Where is my mistake?
It is working just fine here, at least (running a recent build from git master); getData() is being called as expected. Can you see any errors in the console?
EDIT: The problem was that getUrl() explicitly sets NoReload for KIO::get(), which makes it load data from the cache instead of forcing a reload from the server. The solution was to add a query parameter to the URL in order to force a reload.
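A minimal sketch of that cache-busting fix (the parameter name t is arbitrary; any value that changes per request works):
function getData() {
  fileContent = "";
  // Appending a changing query parameter defeats KIO's cache
  job = plasmoid.getUrl("http://192.168.0.10/script.cgi?t=" + new Date().getTime());
  job.data.connect(onData);
  job.finished.connect(onFinished);
  plasmoid.update();
}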
