Load External Script and Style Files in a SPA - javascript

I have a type of SPA which consumes an API to fetch data. There are several instances of this SPA, and all of them use common style and script files. My problem is that when I change a single line in those files, I have to open each and every instance and update the files, which is really time consuming.
One approach is to put those files in a folder on the server and then change the version based on the time, but with this solution I lose the browser cache:
<link href="myserver.co/static/main.css?ver=1892471298" rel="stylesheet" />
<script src="myserver.co/static/script.js?ver=1892471298"></script>
The ver value is produced from the current time, so I cannot use the browser cache. I need a way to drive these file versions from the API so that all of the SPAs get updated.

In your head tag, you can add the code below:
<script type="text/javascript">
  var xmlhttp = new XMLHttpRequest();
  var url = "http://localhost:4000/getLatestVersion"; // API path that returns the latest version

  xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
      var tags = JSON.parse(xmlhttp.responseText);
      for (var i = 0; i < tags.length; i++) {
        var tag = document.createElement(tags[i].tag);
        if (tags[i].tag === 'link') {
          tag.rel = tags[i].rel;
          tag.href = tags[i].url;
        } else {
          tag.src = tags[i].url;
        }
        document.head.appendChild(tag);
      }
    }
  };

  // synchronous request (third argument false) so the tags are injected before the rest of the page loads
  xmlhttp.open("POST", url, false);
  xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
  xmlhttp.send();
</script>
Your API endpoint should allow CORS requests from the website that runs the code above.
And your API should return JSON data like the example below:
var latestVersion = '1892471298'; // this can be stored in the database
var jsonData = [
  {
    tag: 'link',
    rel: 'stylesheet',
    url: 'http://myserver.co/static/main.css?ver=' + latestVersion
  },
  {
    tag: 'script',
    rel: '',
    url: 'http://myserver.co/static/script.js?ver=' + latestVersion
  }
];
// return jsonData to the client here
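For completeness, here is a minimal sketch of what such an endpoint could look like, assuming a Node/Express server with the cors middleware (the route name matches the client code above; the rest is illustrative, not the original answer's code):
// server.js - hypothetical Express endpoint returning the latest asset version
const express = require('express');
const cors = require('cors');
const app = express();

app.use(cors()); // the SPA instances live on other origins

app.post('/getLatestVersion', (req, res) => {
  const latestVersion = '1892471298'; // in practice, read this from a database
  res.json([
    { tag: 'link', rel: 'stylesheet', url: 'http://myserver.co/static/main.css?ver=' + latestVersion },
    { tag: 'script', rel: '', url: 'http://myserver.co/static/script.js?ver=' + latestVersion }
  ]);
});

app.listen(4000);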

If you change anything in your JS or CSS, you do have to bust the browser cache, but you only need to bump the version of that particular file, not all of them; the change will then show up in the browser.

How about adding a method to your API that returns the files' last-modified times, and then inserting that value into the src/href attribute after ver=? The version then only changes when a file actually changes, so the browser cache keeps working between deployments.
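A sketch of that idea, assuming a Node backend and hypothetical file paths:
// derive the version from the file's modification time instead of the current time
const fs = require('fs');

function fileVersion(path) {
  // mtimeMs only changes when the file itself changes
  return Math.floor(fs.statSync(path).mtimeMs);
}

const cssVer = fileVersion('/var/www/static/main.css'); // hypothetical paths
const jsVer  = fileVersion('/var/www/static/script.js');
// return these to the client in the same JSON structure as above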

How to ping an IP address from JavaScript [duplicate]

I'm making a web app that requires that I check to see if remote servers are online or not. When I run it from the command line, my page load goes up to a full 60s (for 8 entries, it will scale linearly with more).
I decided to go the route of pinging on the user's end. This way, I can load the page and just have them wait for the "server is online" data while browsing my content.
If anyone has the answer to the above question, or if they know a solution to keep my page loads fast, I'd definitely appreciate it.
I have found someone that accomplishes this with a very clever usage of the native Image object.
From their source, this is the main function (it has dependencies on other parts of the source, but you get the idea).
function Pinger_ping(ip, callback) {
  if (!this.inUse) {
    this.inUse = true;
    this.callback = callback;
    this.ip = ip;
    var _that = this;
    this.img = new Image();
    // any response at all (even an error) means the host answered
    this.img.onload = function() { _that.good(); };
    this.img.onerror = function() { _that.good(); };
    this.start = new Date().getTime();
    this.img.src = "http://" + ip;
    this.timer = setTimeout(function() { _that.bad(); }, 1500);
  }
}
This works on all types of servers that I've tested (web servers, ftp servers, and game servers). It also works with ports. If anyone encounters a use case that fails, please post in the comments and I will update my answer.
Update: Previous link has been removed. If anyone finds or implements the above, please comment and I'll add it into the answer.
Update 2: @trante was nice enough to provide a jsFiddle.
http://jsfiddle.net/GSSCD/203/
Update 3: @Jonathon created a GitHub repo with the implementation.
https://github.com/jdfreder/pingjs
Update 4: It looks as if this implementation is no longer reliable. People are also reporting that Chrome no longer supports it at all, throwing a net::ERR_NAME_NOT_RESOLVED error. If someone can verify an alternate solution, I will put that as the accepted answer.
Ping is ICMP, but if there is any open TCP port on the remote server it could be achieved like this:
function ping(host, port, pong) {
  var started = new Date().getTime();
  var http = new XMLHttpRequest();

  http.open("GET", "http://" + host + ":" + port, /*async*/ true);
  http.onreadystatechange = function() {
    if (http.readyState == 4) {
      var ended = new Date().getTime();
      var milliseconds = ended - started;
      if (pong != null) {
        pong(milliseconds);
      }
    }
  };
  try {
    http.send(null);
  } catch (exception) {
    // this is expected
  }
}
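Usage would then look something like this (the host and port are placeholders):
ping('example.com', 80, function (milliseconds) {
  console.log('answered in ' + milliseconds + ' ms');
});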
You can try this: put a ping.html file on the server, with or without any content, then in your JavaScript do the following:
<script>
  function ping(){
    $.ajax({
      url: 'ping.html',
      success: function(result){
        alert('reply');
      },
      error: function(result){
        alert('timeout/error');
      }
    });
  }
</script>
You can't directly "ping" in javascript.
There may be a few other ways:
Ajax
Using a java applet with isReachable
Writing a server-side script which pings and using AJAX to communicate with your server-side script
You might also be able to ping in flash (actionscript)
You can't do a regular ping in browser JavaScript, but you can find out whether a remote server is alive by, for example, loading an image from it. If loading fails, the server is down.
You can even measure the loading time by using the onload event. Here's an example of how to use the onload event.
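A minimal sketch of that image-based check (the host, favicon path and timeout are assumptions, not part of the original answer):
function checkHost(host, timeoutMs, callback) {
  var img = new Image();
  var start = Date.now();
  var done = false;

  var timer = setTimeout(function () {
    if (!done) { done = true; callback(false, null); } // no answer within the timeout
  }, timeoutMs);

  // any response at all (even an error page) means the host answered
  img.onload = img.onerror = function () {
    if (!done) { done = true; clearTimeout(timer); callback(true, Date.now() - start); }
  };

  img.src = 'http://' + host + '/favicon.ico?_=' + Date.now(); // cache-buster
}

checkHost('example.com', 1500, function (up, ms) {
  console.log(up ? 'up in ' + ms + ' ms' : 'down');
});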
Pitching in with a websocket solution...
function ping(ip, isUp, isDown) {
  var ws = new WebSocket("ws://" + ip);
  ws.onerror = function(e){
    isUp();
    ws = null;
  };
  setTimeout(function() {
    if (ws != null) {
      ws.close();
      ws = null;
      isDown();
    }
  }, 2000);
}
Update: this solution does not work anymore on major browsers, since the onerror callback is executed even if the host is a non-existent IP address.
To keep your requests fast, cache the ping results server-side and update the ping file or database every couple of minutes (or however accurate you want it to be). You can use cron to run a shell command with your 8 pings and write the output into a file; the web server will then include this file in your view.
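A sketch of that idea as a small Node script you could run from cron (the host list, ping flags and output path are assumptions):
// check-hosts.js - run from cron every couple of minutes
const { exec } = require('child_process');
const fs = require('fs');

const hosts = ['10.0.0.1', '10.0.0.2']; // your servers
const results = {};
let pending = hosts.length;

hosts.forEach(function (host) {
  // -c 1: one packet, -W 1: one-second timeout (Linux ping flags)
  exec('ping -c 1 -W 1 ' + host, function (err) {
    results[host] = !err; // exit code 0 means the host answered
    if (--pending === 0) {
      fs.writeFileSync('/var/www/html/ping-status.json', JSON.stringify(results));
    }
  });
});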
The problem with standard pings is they're ICMP, which a lot of places don't let through for security and traffic reasons. That might explain the failure.
Ruby prior to 1.9 had a TCP-based ping.rb, which will run with Ruby 1.9+. All you have to do is copy it from the 1.8.7 installation to somewhere else. I just confirmed that it would run by pinging my home router.
There are many overcomplicated answers here, especially about CORS.
You could do an http HEAD request (like GET but without payload).
See https://ochronus.com/http-head-request-good-uses/
It does NOT need a preflight check; the confusion comes from an old version of the specification. See:
Why does a cross-origin HEAD request need a preflight check?
So you could use the jQuery-based answer above (it doesn't mention that it uses jQuery), just with
type: 'HEAD'
--->
<script>
  function ping(){
    $.ajax({
      url: 'ping.html',
      type: 'HEAD',
      success: function(result){
        alert('reply');
      },
      error: function(result){
        alert('timeout/error');
      }
    });
  }
</script>
Of course you can also use vanilla JS or Dojo or whatever...
If what you are trying to see is whether the server "exists", you can use the following:
function isValidURL(url) {
var encodedURL = encodeURIComponent(url);
var isValid = false;
$.ajax({
url: "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D%22" + encodedURL + "%22&format=json",
type: "get",
async: false,
dataType: "json",
success: function(data) {
isValid = data.query.results != null;
},
error: function(){
isValid = false;
}
});
return isValid;
}
This will return a true/false indication whether the server exists.
If you want response time, a slight modification will do:
function ping(url) {
var encodedURL = encodeURIComponent(url);
var startDate = new Date();
var endDate = null;
$.ajax({
url: "http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D%22" + encodedURL + "%22&format=json",
type: "get",
async: false,
dataType: "json",
success: function(data) {
if (data.query.results != null) {
endDate = new Date();
} else {
endDate = null;
}
},
error: function(){
endDate = null;
}
});
if (endDate == null) {
throw "Not responsive...";
}
return endDate.getTime() - startDate.getTime();
}
The usage is then trivial:
var isValid = isValidURL("http://example.com");
alert(isValid ? "Valid URL!!!" : "Damn...");
Or:
var responseInMillis = ping("example.com");
alert(responseInMillis);
const ping = (url, timeout = 6000) => {
  return new Promise((resolve, reject) => {
    const urlRule = new RegExp('(https?|ftp|file)://[-A-Za-z0-9+&@#/%?=~_|!:,.;]+[-A-Za-z0-9+&@#/%=~_|]');
    if (!urlRule.test(url)) reject('invalid url');
    try {
      fetch(url)
        .then(() => resolve(true))
        .catch(() => resolve(false));
      setTimeout(() => {
        resolve(false);
      }, timeout);
    } catch (e) {
      reject(e);
    }
  });
};
use like this:
ping('https://stackoverflow.com/')
.then(res=>console.log(res))
.catch(e=>console.log(e))
I don't know what version of Ruby you're running, but have you tried implementing ping for ruby instead of javascript? http://raa.ruby-lang.org/project/net-ping/
const https = require('https');

let webSite = 'https://google.com/'

https.get(webSite, function (res) {
  // If you get here, you have a response.
  // If you want, you can check the status code here to verify that it's `200` or some other `2xx`.
  console.log(webSite + ' ' + res.statusCode)
}).on('error', function (e) {
  // Here, an error occurred. Check `e` for the error.
  console.log(e.code)
});
If you run this with Node, it will log 200 to the console as long as Google is not down.
You can run the DOS ping.exe command from JavaScript using the following:
function ping(ip)
{
  var input = "";
  var WshShell = new ActiveXObject("WScript.Shell");
  var oExec = WshShell.Exec("c:/windows/system32/ping.exe " + ip);
  while (!oExec.StdOut.AtEndOfStream)
  {
    input += oExec.StdOut.ReadLine() + "<br />";
  }
  return input;
}
Is this what was asked for, or am i missing something?
just replace
file_get_contents
with
$ip = $_SERVER['xxx.xxx.xxx.xxx'];
exec("ping -n 4 $ip 2>&1", $output, $retval);
if ($retval != 0) {
  echo "no!";
}
else {
  echo "yes!";
}
It might be a lot easier than all that. If you want your page to load and then check the availability or content of some foreign page to trigger other web-page activity, you can do it using only JavaScript and PHP, like this.
yourpage.php
<?php
if (isset($_GET['urlget'])){
  if ($_GET['urlget'] != ''){
    $foreignpage = file_get_contents('http://www.foreignpage.html');
    // you could also use curl for fancier internet queries or if http wrappers aren't active in your php.ini
    // parse $foreignpage for data that indicates your page should proceed
    echo $foreignpage; // or a portion of it as you parsed
    exit(); // this is very important, otherwise you'll get the contents of your own page returned back to you on each call
  }
}
?>
<html>
mypage html content
...
<script>
var stopmelater = setInterval("getforeignurl('?urlget=doesntmatter')", 2000);

function getforeignurl(url){
  var handle = browserspec();
  handle.open('GET', url, false);
  handle.send();
  var returnedPageContents = handle.responseText;
  // parse the page contents for what you're looking for and trigger javascript events accordingly.
  // use handle.open('GET', url, true) to allow javascript to continue executing; you must provide a callback function to accept the page contents with handle.onreadystatechange
}

function browserspec(){
  if (window.XMLHttpRequest){
    return new XMLHttpRequest();
  } else {
    return new ActiveXObject("Microsoft.XMLHTTP");
  }
}
</script>
That should do it.
The triggered javascript should include clearInterval(stopmelater)
Let me know if that works for you
Jerry
You could try using PHP in your web page...something like this:
<html><body>
<form method="post" name="pingform" action="<?php echo $_SERVER['PHP_SELF']; ?>">
  <h1>Host to ping:</h1>
  <input type="text" name="tgt_host" value='<?php echo $_POST['tgt_host']; ?>'><br>
  <input type="submit" name="submit" value="Submit" >
</form></body>
</html>
<?php
$tgt_host = $_POST['tgt_host'];
$output = shell_exec('ping -c 10 ' . $tgt_host);
echo "<html><body style=\"background-color:#0080c0\">
<script type=\"text/javascript\" language=\"javascript\">alert(\"Ping Results: " . $output . ".\");</script>
</body></html>";
?>
This is not tested so it may have typos etc...but I am confident it would work. Could be improved too...

Tinymce Image upload - save to database

I have configured my TinyMCE to use images_upload_url and images_upload_handler to post a selected image to a server-side page which saves the image to a location on my server. In addition, this server-side page also saves the filename of the image as a record within a database.
I then have another server-side page which reads the database and constructs a JSON list of the images that have been uploaded. This JSON data is then pulled into my Tinymce instance using image_list, so that I can easily reuse previously uploaded images as opposed to having to reupload the same image more than once.
The specific lines of my tiny.init() are:
image_list: 'processes/image-list.php',
image_class_list: [
  {title: 'None', value: ''},
  {title: 'Full width image', value: 'img-responsive'}
],
images_upload_url: 'processes/upload-image.php',
images_upload_handler: function (blobInfo, success, failure) {
  var xhr, formData;
  xhr = new XMLHttpRequest();
  xhr.withCredentials = false;
  xhr.open('POST', 'processes/upload-image-free.asp');
  xhr.onload = function() {
    var json;
    if (xhr.status != 200) {
      failure('HTTP Error: ' + xhr.status);
      return;
    }
    json = JSON.parse(xhr.responseText);
    if (!json || typeof json.location != 'string') {
      failure('Invalid JSON: ' + xhr.responseText);
      return;
    }
    success(json.location);
  };
  formData = new FormData();
  formData.append('file', blobInfo.blob(), blobInfo.filename());
  xhr.send(formData);
},
image_dimensions: false,
All of this works as expected.
What I would like to do is also save a description of the image to the database so this can be outputted as the title within the JSON data of previously uploaded images.
As the upload feature only allows an image to be selected from the file system, I cannot capture a description through it:
So I thought I could utilise the alternative description field of the image feature/modal, but this would have to be done via a JavaScript event that fires when the image modal is submitted, takes the content of the alternative description input field, and POSTs it to a server-side page that can update the database.
Unless there is another way, does anybody know how I can target the 'click' on the 'save' button within the image feature, so that I can extract the alternative description input's content before the image modal disappears?
From there I should be able to work out how to get this to a server-side page to update the database.
Many thanks in advance
I have managed to resolve this, so I am posting a solution to help others - though this is more of a hack.
Firstly, on my form page, after tiny.init() has loaded, I use the following:
document.addEventListener('keyup', logKey);

function logKey(e) {
  labels = document.querySelectorAll(".tox-label");
  for (i = 0; i < labels.length; ++i) {
    if (labels[i].textContent == "Alternative description"){
      imageDescription = document.getElementById(labels[i].htmlFor).value;
    }
  }
};
This loops through all the elements (labels in this case) which have a class of .tox-label, and if the textContent matches "Alternative description", it captures the value in a variable called imageDescription.
Then within my tiny.init I have the following:
editor.on('ExecCommand', function(e) {
  if (e.command == "mceUpdateImage"){
    var http = new XMLHttpRequest();
    var params = encodeURI('desc=' + imageDescription);
    http.open('POST', 'processes/upload-image-description.asp', true);
    http.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    http.onreadystatechange = function() { // call a function when the state changes
      if (http.readyState == 4 && http.status == 200) {
        console.log(http.responseText);
      }
    }
    http.send(params);
  }
});
This code is actioned when the mceUpdateImage modal closes; it takes the value stored in the imageDescription variable and posts it to a server-side page which updates the database.
I am sure there are cleaner ways but they would require more of a TinyMce understanding.

Download files by script or something

On a website (not mine), a search returns results like this:
<a href="show?file=191719&token=r1j">
<a href="show?file=191720&token=gh5">
<a href="show?file=191721&token=98j">
.....
<a href="show?file=191733&token=ty0">
After I click on one of them I go to a page where I fill in a form, then I go to a download page and click on the link:
<a href="download?file=191719&token=r1j">
And I have to do that manually for 150 files, which takes far too long!
What I want is a script (or something similar) that downloads all the files directly by taking the file id from the results page and putting it into the download link.
Use this JavaScript snippet, where http://www.that-website.com/ is the URL of that website. DO NOT download all the files at once if there are too many; download a couple dozen each time by specifying the start and finish file numbers. Note that the browser's popup blocker will block this, so you need to allow popups from this web page in your browser.
JS:
var fileNumber,
    start = 191719,
    finish = 191729;

for (fileNumber = start; fileNumber <= finish; ++fileNumber) {
  window.open("http://www.that-website.com/download?file=" + fileNumber);
}
UPDATE:
Since random tokens are used in the URL, the easiest way is to enter them manually in multiple window.open() lines, something like this:
window.open("http://www.that-website.com/download?file=191719&token=r1j");
window.open("http://www.that-website.com/download?file=191720&token=gh5");
window.open("http://www.that-website.com/download?file=191721&token=98j");
and so on for couple dozens.
UPDATE 2:
See an example of this in this JSFiddle
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Document</title>
</head>
<body>
  <!-- COPY BUNCH OF THE URLs AND PASTE THEM IN HERE THEN RELOAD THE PAGE, THEN REPEAT OVER AND OVER UNTIL IT IS ALL DONE! -->
  <script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
  <script>
    $(document).ready(function(){
      $('a').each(function(){
        var showLink = $(this).attr('href');
        var downloadLink = showLink.replace("show?file", "download?file");
        window.open("http://www.example.com/" + downloadLink);
      });
    });
  </script>
</body>
</html>
With the above code as an HTML page ON YOUR COMPUTER, copy several of the original links from that website page (like: TEST) into your local page and run it; it is still highly recommended that you paste only 10-30 links each time.
You can generate the links using Excel, save them as a txt file and download them using wget with the -i parameter.
You could use an XMLHttpRequest to download files in parallel as blobs and then use <a download>s to initiate download behaviour. This will have same-origin-policy restrictions though.
General idea is
// fetch
var xhr = new XMLHttpRequest();
xhr.addEventListener('load', function () {
  var uri = URL.createObjectURL(this.response); // generate URI to access Blob
  // write, see below
});
xhr.open('GET', target_file_href);
xhr.responseType = 'blob'; // state we want the target as a blob/file
xhr.send(); // send the request

// ---------------

// write
var a = document.createElement('a');
a.href = uri;
a.setAttribute('download', ''); // make this a download link rather than a page change
document.body.appendChild(a);
a.click();
// cleanup a, uri
Here is a parallel file downloader I wrote in ES5 which limits the number of concurrent downloads.
function ParallelDownloader(max_parallel, retry_on_error) {
  this.links = [];
  this.current = 0;
  this.max_parallel = max_parallel || 5;
  this.retry_on_error = !!retry_on_error;
}

ParallelDownloader.prototype = Object.create(null);

ParallelDownloader.prototype.add = function (url) {
  if ('splice' in url && 'length' in url)
    this.links.push.apply(this.links, url);
  else
    this.links.push(url);
  this.downloadNext();
};

ParallelDownloader.prototype.downloadNext = (function () {
  function load() {
    var a = document.createElement('a'),
        uri = URL.createObjectURL(this.response),
        cd = this.getResponseHeader('Content-Disposition'),
        filename = null;
    if (cd) {
      cd = cd.match(/;\s+filename=(.+)/);
      if (cd) filename = cd[1];
    }
    if (null === filename) {
      cd = this.__url.match(/\/([^/]+?(?=\?|$))/);
      if (cd) filename = cd[1];
    }
    if (null !== filename) a.setAttribute('download', filename);
    else a.setAttribute('download', '');
    a.setAttribute('href', uri);
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(uri);
    --this.__parallelDownloader.current;
    this.__parallelDownloader.downloadNext();
  }

  function error() {
    --this.__parallelDownloader.current;
    if (this.__parallelDownloader.retry_on_error) {
      console.warn('Will retry', this.__url);
      this.__parallelDownloader.links.unshift(this.__url); // put the URL back at the front of the queue
    }
    this.__parallelDownloader.downloadNext();
  }

  return function () {
    var url;
    ++this.current;
    if (this.current > this.max_parallel || this.links.length === 0) {
      --this.current;
      return;
    }
    url = this.links.shift();
    var xhr = new XMLHttpRequest();
    xhr.__parallelDownloader = this;
    xhr.__url = url;
    xhr.addEventListener('load', load);
    xhr.addEventListener('error', error);
    xhr.open('GET', url);
    xhr.responseType = 'blob';
    xhr.send();
    this.downloadNext();
  };
}());
To use it you would do, e.g.
var pd = new ParallelDownloader(10); // max 10 concurrent downloads
pd.add([
'/path1.txt', '/path2.pub', '/path3.pdf'
]);
// or
pd.add('/path4.txt');
pd.add('/path5.txt');
// etc
Download attempt initiates as soon as a link is added and there is a slot free. (If you enable retry_on_error I haven't limited it so you may get infinite loops)

Insert external page html into a page html

I'd like to load/insert an external html page into my web page. Example :
<b>Hello this is my webpage</b>
You can see some interesting information here:
XXXX
Hope you enjoyed
The XXXX should be replaced by a small script (as small as possible) that loads a page like http://www.mySite.com/myPageToInsert.html
I found the following code with jquery :
<script>$("#testLoad").load("http://www.mySite.com/myPageToInsert.html");</script>
<div id="testLoad"></div>
I would like to do the same without using an external JavaScript library such as jQuery...
There are 2 solutions for this (2 that I know at least):
Iframe -> this one is not so recommended
Send an ajax request to the desired page.
Here is a small script:
<script type="text/javascript">
  var http; // keep the request object accessible from the response handler

  function createRequestObject() {
    var obj;
    var browser = navigator.appName;
    if (browser == "Microsoft Internet Explorer") {
      obj = new ActiveXObject("Microsoft.XMLHTTP");
    } else {
      obj = new XMLHttpRequest();
    }
    return obj;
  }

  function sendReq(req) {
    http = createRequestObject();
    http.open('get', req);
    http.onreadystatechange = handleResponse;
    http.send(null);
  }

  function handleResponse() {
    if (http.readyState == 4) {
      var response = http.responseText;
      document.getElementById('setADivWithAnIDWhereYouWantIt').innerHTML = response;
    }
  }

  sendReq('yourpage');
</script>
Would an iframe fit the bill?
<b>Hello this is my webpage</b>
You can see some interesting information here:
<iframe id="extFrame" src="http://www.mySite.com/myPageToInsert.html"></iframe>
Hope you enjoyed
You can set the src attribute of your iframe element using plain old javascript to switch out the page for another
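For example, reusing the id above (the target URL is just a placeholder):
document.getElementById('extFrame').src = 'http://www.mySite.com/myOtherPageToInsert.html';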
I think what you are looking for is in the jQuery source code.
You can see more details here: $(document).ready equivalent without jQuery
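In short, the plain-JS equivalent of $(document).ready() is roughly:
// runs once the DOM has been parsed, no jQuery needed
document.addEventListener('DOMContentLoaded', function () {
  // safe to query and modify the DOM here
});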

Refresh image with Javascript, but only if changed on server

I want to reload an image on a page if it has been updated on the server. In other questions it has been suggested to do something like
newImage.src = "http://localhost/image.jpg?" + new Date().getTime();
to force the image to be re-loaded, but that means that it will get downloaded again even if it really hasn't changed.
Is there any Javascript code that will cause a new request for the same image to be generated with a proper If-Modified-Since header so the image will only be downloaded if it has actually changed?
UPDATE: I'm still confused: if I just request the typical URL, I'll get the locally cached copy. (unless I make the server mark it as not cacheable, but I don't want to do that because the whole idea is to not re-download it unless it really changes.) if I change the URL, I'll always re-download, because the point of the new URL is to break the cache. So how do I get the in-between behavior I want, i.e. download the file only if it doesn't match the locally cached copy?
Javascript can't listen for an event on the server. Instead, you could employ some form of long-polling, or sequential calls to the server to see if the image has been changed.
You should have a look at the xhr.setRequestHeader() method. It's a method of any XMLHttpRequest object, and can be used to set headers on your Ajax queries. In jQuery, you can easily add a beforeSend property to your ajax object and set up some headers there.
That being said, caching with Ajax can be tricky. You might want to have a look at this thread on Google Groups, as there's a few issues involved with trying to override a browser's caching mechanisms. You'll need to ensure that your server is returning the proper cache control headers in order to be able to get something like this to work.
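For illustration, jQuery also has a built-in ifModified option that does what this answer describes with beforeSend (it sends If-Modified-Since / If-None-Match for you); a minimal sketch, assuming the image URL from the question:
$.ajax({
  url: 'image.jpg',
  ifModified: true // jQuery sends the conditional headers and reports 304 responses
}).done(function (data, textStatus) {
  if (textStatus === 'notmodified') {
    // server answered 304 - the cached copy is still current, nothing to do
  } else {
    // new content arrived - refresh the <img> here
  }
});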
One way of doing this is to use server-sent events to have the server push a notification whenever the image has been changed. For this you need a server-side script that periodically checks whether the image has been modified. The server-side script below ensures that the server sends an event at least once every (approximately) 60 seconds to prevent timeouts, and the client-side HTML handles navigation away from and back to the page:
sse.py
#!/usr/bin/env python3
import time
import os.path

print("Content-Type: text/event-stream\n\n", end="")

IMG_PATH = 'image.jpg'
modified_time = os.path.getmtime(IMG_PATH)
seconds_since_last_send = 0

while True:
    time.sleep(1)
    new_modified_time = os.path.getmtime(IMG_PATH)
    if new_modified_time != modified_time:
        modified_time = new_modified_time
        print('data: changed\n\n', end="", flush=True)
        seconds_since_last_send = 0
    else:
        seconds_since_last_send += 1
        if seconds_since_last_send == 60:
            print('data: keep-alive\n\n', end="", flush=True)
            seconds_since_last_send = 0
And then your HTML would include some JavaScript code:
sse.html
<html>
<head>
<meta charset="UTF-8">
<title>Server-sent events demo</title>
</head>
<body>
<img id="img" src="image.jpg">
<script>
const img = document.getElementById('img');
let evtSource = null;
function setup_sse()
{
console.log('Creating new EventSource.');
evtSource = new EventSource('sse.py');
evtSource.onopen = function() {
console.log('Connection to server opened.');
};
// if we navigate away from this page:
window.onbeforeunload = function() {
console.log('Closing connection.');
evtSource.close();
evtSource = null;
};
evtSource.onmessage = function(e) {
if (e.data == 'changed')
img.src = 'image.jpg?version=' + new Date().getTime();
};
evtSource.onerror = function(err) {
console.error("EventSource failed:", err);
};
}
window.onload = function() {
// if we navigate back to this page:
window.onfocus = function() {
if (!evtSource)
setup_sse();
};
setup_sse(); // first time
};
</script>
</body>
</html>
Here I am loading an image, tree.png, as binary data dynamically with AJAX and saving the Last-Modified header. Periodically (every 5 seconds in the code below) I issue another download request, sending back an If-Modified-Since header using the saved Last-Modified value. I then check whether data has been returned and re-create the image from the data if it is present:
<!doctype html>
<html>
<head>
<title>Test</title>
<script>
window.onload = function() {
let image = document.getElementById('img');
var lastModified = ''; // 'Sat, 11 Jun 2022 19:15:43 GMT'
function _arrayBufferToBase64(buffer) {
var binary = '';
var bytes = new Uint8Array(buffer);
var len = bytes.byteLength;
for (var i = 0; i < len; i++) {
binary += String.fromCharCode(bytes[i]);
}
return window.btoa( binary );
}
function loadImage()
{
var request = new XMLHttpRequest();
request.open("GET", "tree.png", true);
if (lastModified !== '')
request.setRequestHeader("If-Modified-Since", lastModified);
request.responseType = 'arraybuffer';
request.onload = function(/* oEvent */) {
lastModified = request.getResponseHeader('Last-Modified');
var response = request.response;
if (typeof response !== 'undefined' && response.byteLength !== 0) {
var encoded = _arrayBufferToBase64(response);
image.src = 'data:image/png;base64,' + encoded;
}
window.setTimeout(loadImage, 5000);
};
request.send();
}
loadImage();
};
</script>
</head>
<body>
<img id="img">
</body>
</html>
You can write a server-side method which just returns the last modified date of the image resource.
Then you just use polling to check the modified date and reload the image if the modified date is greater than the previous one.
pseudo code (ASP.NET)
//server side ajax method
[WebMethod]
public static string GetModifiedDate(string resource)
{
string path = HttpContext.Current.Server.MapPath("~" + resource);
FileInfo f = new FileInfo(path);
return f.LastWriteTimeUtc.ToString("yyyy-dd-MMTHH:mm:ss", CultureInfo.InvariantCulture);//2020-05-12T23:50:21
}
var pollingInterval = 5000;
function getPathFromUrl(url) {
return url.split(/[?#]/)[0];
}
function CheckIfChanged() {
$(".img").each(function (i, e) {
var $e = $(e);
var jqxhr = $.ajax({
type: "POST",
contentType: "application/json; charset=utf-8",
url: "/Default.aspx/GetModifiedDate",
data: "{'resource':'" + getPathFromUrl($e.attr("src")) + "'}"
}).done(function (data, textStatus, jqXHR) {
var dt = jqXHR.responseJSON.d;
var dtCurrent = $e.attr("data-lastwrite");
if (dtCurrent) {
var curDate = new Date(dtCurrent);
var dtLastWrite = new Date(dt);
//refresh if modified date is higher than current date
if (dtLastWrite > curDate) {
$e.attr("src", getPathFromUrl($e.attr("src")) + "?d=" + new Date());//fool browser with date querystring to reload image
}
}
$e.attr("data-lastwrite", dt);
});
}).promise().done(function () {
window.setTimeout(CheckIfChanged, pollingInterval);
});
}
$(document).ready(function () {
window.setTimeout(CheckIfChanged, pollingInterval);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<img class="img" src="/img/rick.png" alt="rick" />
If you are going to check whether a file has changed on the server, you have to make an HTTP request for the file time, because there is no other way to check the file time once the page has loaded in the browser.
So the time-check script will look like this:
filetimecheck.php
<?php
echo filemtime('image.jpg'); // path of the image being displayed
?>
Then you can check the file time using your JavaScript. BTW, I have used jQuery $.get to check the file time.
displayimage.php
<img id="badge" src="image.jpg" />
<script>
var image_time = <?php echo filemtime('image.jpg'); ?>;
var timerdelay = 5000;

function imageloadFunction(){
  $.get("filetimecheck.php", function(data, status){
    console.log("Data: " + data + "\nStatus: " + status);
    if (image_time < parseInt(data)) {
      document.getElementById('badge').src = "image.jpg?random=" + new Date().getTime();
    }
  });
  setTimeout(imageloadFunction, timerdelay);
}

imageloadFunction();
</script>
You will be making an extra call to the server to check the file time, which you can't avoid, but you can use the time delay to fine-tune the polling interval.
Yes, you can customize this behavior. Even with virtually no change to your client code.
So, you will need a ServiceWorker (caniuse 96.59%).
ServiceWorker can proxy your HTTP requests. Also, ServiceWorker already has built-in storage for the cache. If you have not worked with ServiceWorker, then you need to study it in detail.
The idea is the following:
When requesting a picture (in fact, any file), check the cache.
If there is no such picture in the cache, send a request and fill the cache storage with the date of the request and the file.
If the cache contains the required file, then send only the date and path of the file to a special API on the server.
The API returns either the file and modification date at once (if the file was updated), or the response that the file has not changed {"changed": false}.
Then, based on the response, the worker either writes a new file to the cache and resolves the request with the new file, or resolves the request with the old file from the cache.
Here is example code (not working, just for understanding):
s-worker.js
self.addEventListener('fetch', (event) => {
if (event.request.method !== 'GET') return;
event.respondWith(
(async function () {
const cache = await caches.open('dynamic-v1');
const cachedResponse = await cache.match(event.request);
if (cachedResponse) {
// check if a file on the server has changed
const isChanged = await fetch('...');
if (isChanged) {
// give file, and in the background write to the cache
} else {
// return data
}
return cachedResponse;
} else {
// request data, send from the worker and write to the cache in the background
}
})()
);
});
In any case, look for "ways to cache statics using ServiceWorker" and change the examples for yourself.
WARNING this solution is like taking a hammer to crush a fly
You can use socket.io to push information to the browser.
In this case you need to monitor image file changes on the server side, and then, if a change occurs, emit an event to indicate the file change.
On the client (browser) side, listen for the event and refresh the image each time you receive it.
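A minimal sketch of that setup, assuming Node with socket.io and fs.watch (the event name, port and paths are made up):
// server.js - watch the image and notify connected browsers
const http = require('http').createServer();
const io = require('socket.io')(http);
const fs = require('fs');

fs.watch('public/image.jpg', function () {
  io.emit('image-changed', Date.now()); // broadcast to every connected client
});

http.listen(3000);

// client side (in the page), after loading the socket.io client script:
// const socket = io('http://localhost:3000');
// socket.on('image-changed', function (version) {
//   document.getElementById('logo').src = 'image.jpg?ver=' + version; // reload only when the file changed
// });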
Set your image source in a data-src property, and use JavaScript to periodically copy it into the src attribute of that image with an anchor (#) appended; the anchor part of the URL isn't sent to the server.
Your web server (Apache / nginx) should respond with an HTTP 304 if the image wasn't changed, or a 200 OK with the new image in the body if it was.
setInterval(function(){
  l = document.getElementById('logo');
  l.src = l.dataset.src + '#' + new Date().getTime();
}, 1000);
<img id="logo" alt="awesome-logo" data-src="https://upload.wikimedia.org/wikipedia/commons/1/11/Test-Logo.svg" />
EDIT
Chrome ignores HTTP cache-control headers for subsequent image reloads,
but the fetch API works as expected:
fetch('https://upload.wikimedia.org/wikipedia/commons/1/11/Test-Logo.svg', { cache: "no-cache" }).then(console.log);
the no-cache instructs the browser to always revalidate with the server, and if the server responds with 304, use the local cached version.
