In a pure vanilla JavaScript web app that does not use service workers, I would like to explicitly cache a JavaScript file that is hosted on AWS S3. The following script sits in the application's index.html (I've modified the URL as it's a client project):
<script>
caches.match('https://s3.amazonaws.com/com.myproject/myjavascript.js')
  .then(function(response) {
    if (response) {
      return response;
    } else {
      fetch('https://s3.amazonaws.com/com.myproject/myjavascript.js')
        .then(function(res) {
          return caches.open('mycache')
            .then(function(cache) {
              cache.put('https://s3.amazonaws.com/com.myproject/myjavascript.js', res.clone());
              console.log(res.clone());
              return res;
            });
        });
    }
  });
</script>
I believe this code should do the following: check if the myjavascript.js file is in the cache. If it is, return the JavaScript file, which would then be executed by the browser. If myjavascript.js is not found in the cache, it will be fetched, placed in the subcache 'mycache', and finally returned to the browser where it would be executed.
After running this, I find the URL for the file in the cache with a response of "Ok", but the code is not executed by the browser and I don't see the file contents in the Sources panel of the Chrome developer tools.
Why would this not be working? What is wrong with my thinking on this?
Many thanks,
Fred
fetch by itself will not execute JavaScript. It simply makes a request for the specified content and makes it available for your code to access. If you really want to continue with this approach, it is possible to take the text and eval it:
const url = 'https://unpkg.com/underscore@1.8.3/underscore-min.js';
caches.match(url)
  .then(function(response) {
    if (response) {
      return response;
    } else {
      return fetch(url)
        .then(function(res) {
          return caches.open('mycache')
            .then(function(cache) {
              cache.put(url, res.clone());
              console.log(res.clone());
              return res;
            });
        });
    }
  })
  .then(function(response) {
    console.log(response);
    response.text().then(function(text) {
      eval(text);
      console.log(_);
    });
  });
Note: Why is using the JavaScript eval function a bad idea?
The code sample you have is a pattern commonly found in service workers. The reason it works in that context is that the initial request comes from a <script> tag and not a direct invocation of fetch. Because of the <script> tag, the browser automatically handles executing the returned content.
<script src="https://unpkg.com/underscore#1.8.3/underscore-min.js"></script>
Related
When I debug this code in the Chrome console it does not show any output or alert. Please help me complete this code; I need to read the text of my read.txt file into console.log.
The code I tried is shown below.
function loadText() {
  fetch('C:\Windows\Temp\read.txt')
    .then(function(response) {
      return response.text();
    })
    .then(function(data) {
      console.log(data);
      alert(data);
    })
    .catch(function(error) {
      console.log(error);
      alert(error); // the original alerted data here, which is undefined in the catch
    });
}
Try the code below and point it at a path relative to your web server, as shown:
async function fetchText() {
  let response = await fetch('../demo.txt');
  console.log(response.status); // 200
  console.log(response.statusText); // OK
  if (response.status === 200) {
    let data = await response.text();
    console.log(data);
    // handle data
  }
}
fetchText();
This seems like a duplicate of this issue - AJAX request to local file system not working in Chrome?
The problems are the same; however, you are using the Fetch API (https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) rather than an XMLHttpRequest (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest).
When I run loadText I get the following error:
Fetch API cannot load . URL scheme must be "http" or "https" for CORS request.
You cannot make requests to the filesystem in Chrome. However, you can disable Chrome security using the --allow-file-access-from-files flag;
see Allow Google Chrome to use XMLHttpRequest to load a URL from a local file, although this is not advised.
You will also need to update the path in your fetch call by prefixing it with file:///; this tells the browser to look in the file system instead of using the http or https protocol.
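A rough sketch of what that would look like (path taken from the question; whether fetch accepts file:// URLs at all depends on the Chrome version, so this may still be rejected even with the flag set):

// For illustration only: assumes Chrome was started with
// --allow-file-access-from-files, which is not advised.
// Forward slashes avoid the backslash escape problem in the original path.
fetch('file:///C:/Windows/Temp/read.txt')
  .then(function(response) {
    return response.text();
  })
  .then(function(data) {
    console.log(data);
    alert(data);
  })
  .catch(function(error) {
    console.log(error);
  });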
I want to load some template files dynamically with the help of Ajax. I use the $.get method to load the HTML files, and it works fine in all browsers except Safari.
In Safari it gives me a "Failed to load resource: cancelled" error the first time I open the URL. However, after I refresh the page, it loads all the files.
When I open my URL over http instead of https, Safari loads the template files on the first try.
This issue only happens when I open the URL over https. I have successfully installed the certificate and it works fine in other browsers; Safari doesn't report any certificate issue either.
Here is my code
var decorator = {
  init: function(book, cd) {
    this.loadTPL(cd);
  },
  tpl: {
    btnStart: "tpl/startBtn.html",
    interfaceTpl: "tpl/interfaceTpl.html",
    topMenu: "tpl/topMenu.html",
    topMenuItem: "tpl/topMenuItem.html"
  },
  loadTPL: function(cbTpl) {
    var self = this;
    var objTpl = {};
    async.forEachOf(this.tpl, function(value, key, callback) {
      $.get(value, {}, function(data) {
        //alert("Load was performed.");
        //console.log(value, data);
        objTpl[key] = data;
        callback();
      });
    }, function(err, results) {
      if (err) {
        console.log(err);
      }
      self.tpl = objTpl;
      cbTpl(err);
    });
  }
}
Any idea?
While your approach "should" work, it goes into weird, unknown areas of JS, especially using the async lib. So my solution basically involves refactoring all of it. Instead of async, you can use jQuery promises to fire all the GETs you need, and then handle the responses/errors in each one with the promise handlers.
As an example:
$(templatesToLoad).each(function (index, element) {
  $.ajax({ url: element.url, cache: false })
    .done(function (result) {
      objTpl[element.key] = result; // assumes each entry carries its template key
      element.callback(); // callback for each template
    })
    .fail(function () {
      alert("error");
    })
    .always(function () {
      alert("completed");
    });
});
Note: $.get is just sugar for $.ajax. By default, $.ajax performs a GET unless another method is specified.
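To make that concrete, these two calls are roughly equivalent (the template path is borrowed from the question's tpl map):

// Shorthand form
$.get('tpl/topMenu.html', function(data) {
  console.log(data);
});

// Same request spelled out with $.ajax
$.ajax({
  url: 'tpl/topMenu.html',
  type: 'GET', // GET is already the default
  success: function(data) {
    console.log(data);
  }
});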
The browser, whichever it is, will handle the calls and trigger each one of them as soon as permitted, based on each browser's capabilities and limitations, so there is no need to worry about browser-specific implementations.
As a general rule, always remember to check the encoding of the calls and responses and their format: JSON, text, or whatever you use as a response format.
This is likely a cache/timeout issue. Try setting the ajax timeout to something huge. If that works, back it off until you find the sweet spot.
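A rough sketch of that idea, reusing one of the question's template paths (the 60-second value is arbitrary, just a starting point to rule timeouts in or out):

$.ajax({
  url: 'tpl/startBtn.html',
  cache: false,
  timeout: 60000, // generous timeout in milliseconds; reduce once it works
  success: function(data) {
    console.log(data);
  },
  error: function(jqXHR, textStatus) {
    console.log(textStatus); // "timeout" shows up here if the limit is hit
  }
});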
I'm trying to cache a single-page web app with a service worker. It should get all its files from the cache and update that cache only when a new service worker version is published.
With a precache function I'm writing some files to the cache as follows:
function precache() {
  return caches.open(CACHE).then(function(cache) {
    return cache.addAll([
      'index.html',
      'js/script.js',
      'img/bg.png',
      'img/logo.svg',
      ...
    ]);
  });
}
(I've tried to cache with and without "/" before the paths, and even with absolute paths. Makes no difference)
In Chrome's Cache Storage, the content of all those files is exactly as it should be. But when I try to serve the files from the cache on reload of the page, none of the requests match the cache; they all get rejected, even when I'm still online.
self.addEventListener('fetch', function(evt) {
  evt.respondWith(
    caches.match(evt.request).then(function(response) {
      if (response) {
        return response;
      } else {
        reject('no result');
      }
    }).catch(function() {
      if (evt.request.url == 'https://myurl.com') {
        return caches.match('/index.html');
      }
    })
  );
});
The index.html from the catch function gets served correctly, and in turn requests the other files, like /js/script.js. Those requests show up like this in the console:
Request { method: 'GET', url: 'https://myurl.com/js/script.js', ... referrer: 'https://myurl.com' }
But they do not return a response; only a notice like this shows up:
The FetchEvent for "https://myurl.com/js/script.js" resulted in a network error response: an object that was not a Response was passed to respondWith().
Am I missing something here?
Thanks to the link from Rajit (https://developer.mozilla.org/en-US/docs/Web/API/Cache/match) I've found that the caches.match() function accepts an options object.
I've updated that line in my service worker to
caches.match(evt.request,{cacheName:CACHE,ignoreVary:true}).then(function(response) {
so it includes the cache name and ignores VARY header matching, and now it returns the correct files.
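In context, the question's fetch handler would then look roughly like this (a sketch, not the exact code: the throw replaces the undefined reject() so that respondWith() never receives a non-Response value, and the final network fallback is my own addition):

self.addEventListener('fetch', function(evt) {
  evt.respondWith(
    caches.match(evt.request, { cacheName: CACHE, ignoreVary: true })
      .then(function(response) {
        if (response) {
          return response;
        }
        throw new Error('no result'); // jump to the catch below
      })
      .catch(function() {
        if (evt.request.url === 'https://myurl.com') {
          return caches.match('/index.html', { cacheName: CACHE, ignoreVary: true });
        }
        return fetch(evt.request); // last resort: go to the network
      })
  );
});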
I had the same problem and it seems to have been solved by using the ignoreVary:true parameter. The documentation explicitly states that the cacheName parameter is ignored by Cache.match()
An important note: add all possible URL versions, with and without a trailing slash, because even when autocompleted they seem to be treated as two different things. So, for example, if you had a PWA in domain/folder/,
calling domain/folder/ online and caching it won't make domain/folder work offline (in some cases) unless you previously accessed the latter online as well.
Solution:
when adding via caches.addAll or similar, add both
'/folder/'
AND
'/folder'.
What never did a thing for me, on the other hand, was ignoreVary.
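For what it's worth, a minimal sketch of that advice applied to the precache step (the folder path here is just an example, not from the question):

// Cache both the slashed and slash-less form of the start URL.
caches.open(CACHE).then(function(cache) {
  return cache.addAll([
    '/folder/',
    '/folder',
    '/folder/index.html'
  ]);
});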
How can we get the source code of a web page from a web page in PHP and/or JavaScript?
In JavaScript, without using unnecessary frameworks (in the example, api.codetabs.com is a proxy used to bypass Cross-Origin Resource Sharing restrictions):
fetch('https://api.codetabs.com/v1/proxy?quest=google.com').then((response) => response.text()).then((text) => console.log(text));
Thanks to:
@PLB
@Shadow Wizard
Getting the source code of an iframe
http://www.frihost.com/forums/vt-32602.html
@Matt Coughlin.
First, you must know that you will never be able to get the source code of a page that is not on the same domain as your page in JavaScript (see http://en.wikipedia.org/wiki/Same_origin_policy).
In PHP, this is how you do it:
file_get_contents($theUrl);
In JavaScript, there are three ways:
Firstly, by XMLHttpRequest: http://jsfiddle.net/635YY/1/
var url="../635YY",xmlhttp; // Remember, same domain
if("XMLHttpRequest" in window)xmlhttp=new XMLHttpRequest();
if("ActiveXObject" in window)xmlhttp=new ActiveXObject("Msxml2.XMLHTTP");
xmlhttp.open('GET',url,true);
xmlhttp.onreadystatechange=function()
{
  if(xmlhttp.readyState==4)alert(xmlhttp.responseText);
};
xmlhttp.send(null);
Secondly, by iframes: http://jsfiddle.net/XYjuX/1/
var url="../XYjuX"; // Remember, same domain
var iframe=document.createElement("iframe");
iframe.onload=function()
{
  alert(iframe.contentWindow.document.body.innerHTML);
}
iframe.src=url;
iframe.style.display="none";
document.body.appendChild(iframe);
Thirdly, by jQuery: http://jsfiddle.net/edggD/2/
$.get('../edggD',function(data) // Remember, same domain
{
  alert(data);
});
Following Google's guide on fetch() and using the D.Snap answer, you would have something like this:
fetch('https://api.codetabs.com/v1/proxy?quest=URL_you_want_to_fetch')
  .then(function(response) {
    if (response.status !== 200) {
      console.log('Looks like there was a problem. Status Code: ' + response.status);
      return;
    }
    // Examine the text in the response
    response.text().then(function(data) {
      // data contains all the plain HTML of the URL you previously set;
      // you can use it as you want, it is typeof string
      console.log(data);
    });
  })
  .catch(function(err) {
    console.log('Fetch Error :-S', err);
  });
This way you are using a CORS proxy; in this example it is the Codetabs CORS proxy.
A CORS proxy allows you to fetch resources that are not on your own domain, thus avoiding the same-origin policy blocking your requests.
You can take a look at other CORS proxies:
https://nordicapis.com/10-free-to-use-cors-proxies/
Ajax example using jQuery:
// Display the source code of a web page in a pre tag (escaping the HTML).
// Only works if the page is on the same domain.
$.get('page.html', function(data) {
  $('pre').text(data);
});
If you just want access to the source code, the data parameter in the above code contains the raw HTML source code.
I have a text file in the root of my web app at http://localhost/foo.txt and I'd like to load it into a variable in JavaScript. In Groovy I would do this:
def fileContents = 'http://localhost/foo.txt'.toURL().text;
println fileContents;
How can I get a similar result in JavaScript?
XMLHttpRequest, i.e. AJAX, without the XML.
The precise manner in which you do this depends on which JavaScript framework you're using, but if we disregard interoperability issues, your code will look something like this:
var client = new XMLHttpRequest();
client.open('GET', '/foo.txt');
client.onreadystatechange = function() {
  alert(client.responseText);
}
client.send();
Normally speaking, though, XMLHttpRequest isn't available on all platforms, so some fudgery is done. Once again, your best bet is to use an AJAX framework like jQuery.
One extra consideration: this will only work as long as foo.txt is on the same domain. If it's on a different domain, same-origin security policies will prevent you from reading the result.
Here is how I did it in jQuery:
jQuery.get('http://localhost/foo.txt', function(data) {
  alert(data);
});
Update 2019: Using Fetch:
fetch('http://localhost/foo.txt')
  .then(response => response.text())
  .then((data) => {
    console.log(data)
  })
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
If you only want a constant string from the text file, you could include it as JavaScript:
// This becomes the content of your foo.txt file
let text = `
My test text goes here!
`;
<script src="foo.txt"></script>
<script>
console.log(text);
</script>
The string loaded from the file becomes accessible to JavaScript once it has loaded. The backtick (`) character begins and ends a template literal, allowing both " and ' characters in your text block.
This approach works well when you're attempting to load a file locally, as Chrome will not allow AJAX on URLs with the file:// scheme.
Update 2020: Using Fetch with async/await
const response = await fetch('http://localhost/foo.txt');
const data = await response.text();
console.log(data);
Note that await can only be used in an async function. A longer example might be
async function loadFileAndPrintToConsole(url) {
  try {
    const response = await fetch(url);
    const data = await response.text();
    console.log(data);
  } catch (err) {
    console.error(err);
  }
}
loadFileAndPrintToConsole('https://threejsfundamentals.org/LICENSE');
This should work in almost all browsers:
var xhr=new XMLHttpRequest();
xhr.open("GET","https://12Me21.github.io/test.txt");
xhr.onload=function(){
  console.log(xhr.responseText);
}
xhr.send();
Additionally, there's the new Fetch API:
fetch("https://12Me21.github.io/test.txt")
.then( response => response.text() )
.then( text => console.log(text) )
One thing to keep in mind is that JavaScript runs on the client, not on the server. You can't really "load a file" from the server in JavaScript. What happens is that JavaScript sends a request to the server, and the server sends back the contents of the requested file. How does JavaScript receive the contents? That's what the callback function is for. In Edward's case, that is
client.onreadystatechange = function() {
and in danb's case, it is
function(data) {
This function is called whenever the data happens to arrive. The jQuery version implicitly uses Ajax; it just makes the coding easier by encapsulating that code in the library.
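To make the callback idea concrete, here is a minimal hand-rolled helper along the lines of what the library encapsulates (the getText function name is made up):

// Minimal callback-style wrapper around XMLHttpRequest, roughly what
// jQuery.get does for you behind the scenes.
function getText(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseText); // runs only once the data has arrived
    }
  };
  xhr.send(null);
}

getText('foo.txt', function(data) {
  alert(data);
});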
When working with jQuery, instead of using jQuery.get, e.g.
jQuery.get("foo.txt", undefined, function(data) {
alert(data);
}, "html").done(function() {
alert("second success");
}).fail(function(jqXHR, textStatus) {
alert(textStatus);
}).always(function() {
alert("finished");
});
you could use .load, which gives you a much more condensed form:
$("#myelement").load("foo.txt");
.load also gives you the option to load partial pages, which can come in handy; see api.jquery.com/load/.
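For example, passing a selector after the URL inserts only the matching fragment of the fetched page (the #container id here is hypothetical):

// Load page.html, then insert only the contents of its #container element.
$("#myelement").load("page.html #container");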