How do SPAs handle a 304 Not Modified response? - javascript

Let’s say a Single Page Application (SPA) written in Angular or Vue.js, for example, loads 3 components on a page, and each component calls a different backend API.
Then a user refreshes the page. The same 3 calls are made, but this time the backend returns 304 for each of them.
The SPA components are just JS code, so with each call to the backend they expect data to be returned. When a 304 comes back there is no data, yet everything keeps working fine...
There’s some magic I don’t understand. Can someone help me understand this?

The server will only return a 304 response with no content if the client includes an If-Modified-Since or If-None-Match request header. Without those headers the server will always respond with the full content. There are two possibilities:
The JavaScript code itself adds those headers because it still has the data somewhere (for example in local storage), and can therefore handle an empty 304 response by reusing the stored data.
The browser includes those headers because it still has the response in its HTTP cache, and if the server returns 304 it transparently hands the cached data back to the JavaScript request. Browsers do in fact do this: the HTTP cache sits below fetch/XMLHttpRequest, so the conditional request and the 304 are handled transparently and the script receives the cached body as if it were a normal 200 response; the 304 is only visible in the network panel of the devtools.
If the JavaScript is managing its own cache instead, perhaps in a service worker, the first possibility applies.
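
If the code does manage its own cache (the first possibility), a minimal sketch might look like this; the helper name and storage keys are illustrative, not taken from any particular framework:

// Hypothetical helper; assumes the API exposes an ETag header.
async function loadWidget(url) {
  const cachedEtag = localStorage.getItem(url + ':etag');
  const cachedBody = localStorage.getItem(url + ':body');

  const headers = {};
  if (cachedEtag && cachedBody) headers['If-None-Match'] = cachedEtag;

  const res = await fetch(url, { headers });
  if (res.status === 304) return JSON.parse(cachedBody); // reuse the stored data

  const body = await res.text();
  const etag = res.headers.get('ETag');
  if (etag) {
    localStorage.setItem(url + ':etag', etag);
    localStorage.setItem(url + ':body', body);
  }
  return JSON.parse(body);
}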

Related

How to determine, in a service worker, which HTML5 page originated the fetch request?

Is there a mechanism in a service worker, of which I am unaware, that can tell me which page a fetch request was fired from?
Example:
I have an HTML page uploaded to my website at /aPageUploadedByUser/onMySite/index.html, and I want to make sure that none of the authorization headers get passed to any fetch request made by /aPageUploadedByUser/onMySite/index.html to any source.
If my service worker somehow knows the originating page of a request, I can modify the requests for safety.
To answer your question (but with a caveat that this is not a good idea!), there are two general approaches, each with its own drawbacks:
The Referer request header is traditionally set to the URL of the web page that made the request. However, this header is not always set for all types of requests.
A FetchEvent has a clientId property, and that value can be passed to clients.get() to obtain a reference to the Client that created the request. The Client, in turn, has a url property. The caveats here are that clients.get() is asynchronous (meaning you can't use its result to determine whether or not to call event.respondWith()), and that the URL of the Client may have changed in the interval between the request being made and when you read the url property.
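
For illustration, a rough sketch of the clientId approach (logging only, since the asynchronous lookup means it cannot be used to decide whether to call event.respondWith()):

self.addEventListener('fetch', (event) => {
  const clientId = event.clientId; // may be empty, e.g. for navigation requests
  event.waitUntil((async () => {
    if (!clientId) return;
    const client = await self.clients.get(clientId); // resolves to a Client or undefined
    if (client) {
      console.log('Request for ' + event.request.url + ' came from ' + client.url);
    }
  })());
});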
With those approaches outlined, the bigger problem is that using a service worker in this way is not a safe practice. The first time a user visits your origin, they won't have a service worker installed. And then on follow-up visits, if a user "hard" reloads the page with the shift key, the page won't be controlled by a service worker either. You can't rely on a service worker's fetch event handler to implement any sort of critical validation.

Best way to submit/retrieve data from an external server, when you have access to the external server but not the same-domain server

I'm working on an SDK type thing for submitting data (including file uploads) to a service that I run.
I'm doing research, and trying to figure out the best way to submit data to, and get a response from, an external server (my server) without being blocked by the browser's same-origin policy.
The current setup is as so:
The customer hosts a server, and uses my server side library.
They generate a client page that loads the required JS from my server.
The client page requests data from my server (if it was not passed from the SDK on page load), and displays the information to the user.
The user then triggers an event, which submits data (potentially including file uploads) to my server (not the local server with the SDK library).
My server responds success or fail and the client JS handles it appropriately.
Some notes:
My server is a private PHP server that I have complete control over.
Although I could route all data through the customer's server (as they are using my library), it is not ideal, as it requires more set up for the customer, is slower for the end user, and handling file uploads is problematic as I want those files on my server, not theirs.
I thought perhaps the file upload inputs could be in an iframe. Will this allow uploads directly to my server?
Since the customer is using my library with an API key, I can authenticate the client's requests by passing an authentication token to the front end on page load that then gets passed to my server with whatever communication method ends up working.
I am open to changes in the architecture, but this is the ideal set up for me. I am just not sure what frontend methods are best for implementing this.
JSONP would work if you only need to make GET requests, but it sounds like you need to do POSTs as well since you mention file uploads.
For your case, Cross-Origin Resource Sharing (CORS) might work. The short explanation is that the browser will send an extra header named Origin if you make a request with XMLHttpRequest to another domain. Your server needs to respond with an additional header named Access-Control-Allow-Origin with a value of * or the value the browser sent in the Origin header. There are some nuances and gotchas when using CORS, so I recommend reading a thorough CORS reference before relying on it.
With CORS set up, you should be able to use XMLHttpRequest to upload files.
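
A minimal client-side sketch, assuming your PHP endpoint (the URL below is a placeholder) already responds with a suitable Access-Control-Allow-Origin header:

// Hypothetical element and token names; adjust to your SDK.
const fileInput = document.querySelector('input[type=file]');
const authToken = window.MY_SDK_TOKEN; // token issued by your library at page load

const form = new FormData();
form.append('file', fileInput.files[0]);
form.append('token', authToken);

const xhr = new XMLHttpRequest();
xhr.open('POST', 'https://sdk.example.com/upload'); // placeholder URL
xhr.onload = () => console.log('status', xhr.status, xhr.responseText);
xhr.onerror = () => console.error('upload failed (network or CORS error)');
xhr.send(form); // the browser sets the multipart boundary itself

Note that a plain multipart POST like this counts as a "simple" request, but as soon as you add a custom header (for example an Authorization header for your token), the browser will send an OPTIONS preflight that your server must also answer with the CORS headers.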

Why is expressjs sending Set-Cookie header with every OPTIONS response?

I've observed that my express server sends a Set-Cookie header in response to every OPTIONS request made by my front-end Angular.js project.
The cookie has a new value on each of the OPTIONS requests, but is never actually stored by Chrome.
When I use passport.js to authenticate a user, the response to the POST request has the same header (again with a new cookie), but this time I can see that it is stored, and sent with subsequent requests in the Cookie header.
Which express module is doing this, and why? And why does Chrome not store the cookie?
This is more curiosity than anything, as it's not causing any problems (just caused a lot of confusion when trying to track one down).
The OPTIONS method is not supposed to have side effects; see the HTTP/1.1 specification.
OPTIONS is a request for information about the server. Such a request is not considered a real interaction between a user and the server; the server would make the same information available to any user.
The browser respects this and chooses to ignore the cookie, conforming to the specification. That said, it is a security risk to send cookie data to the user unnecessarily: even if the cookie is not valid, it can reveal server-side internals that attackers could exploit.
Pretty sure this is a bug in the current session module. If you're using the new cookie session module instead, you won't hit this problem. Feel free to file a bug: https://github.com/expressjs/session
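
If the extra Set-Cookie bothers you in the meantime, one possible workaround (a sketch built on express-session's documented options, not an official fix) is to skip the session middleware for OPTIONS requests:

const express = require('express');
const session = require('express-session');

const app = express();
const sessionMiddleware = session({
  secret: 'keyboard cat', // placeholder secret
  resave: false,
  saveUninitialized: true
});

app.use((req, res, next) => {
  if (req.method === 'OPTIONS') return next(); // no session, so no Set-Cookie on preflights
  return sessionMiddleware(req, res, next);
});

app.listen(3000);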

jQuery: $.getJSON with a different URL port

I am trying to use $.getJSON with a local app calling another local app on a different port.
For example, my app is running on localhost:3000, but I want to make a $.getJSON call to another app running on localhost:3001. In Firebug the request shows up in red with a 200 response, but with no data in the response. Is there a way to do this? I tried this:
$.getJSON('http://localhost:3001/dashboard/widgets/marketing_efficiency_gauge.json',
  { key: 'value' },
  function (data) {
    alert(data);
  });
Edit: for clarity, there are two Rails apps involved: one on localhost:3000 and another on localhost:3001.
Second edit: here is the JSON response from localhost:3001 when I hit it with a browser (say, Firefox): https://gist.github.com/willfults/7665299
The Same-Origin Policy prevents JavaScript from making HTTP requests to different origins. For the purposes of the policy, a URL with the same hostname but a different port (as is the case here) is still considered a different origin, and hence the request is not permitted.
What typically happens in such cases is that the browser actually does make the request over the network, but drops the response and sends an error result to the JavaScript.
To fix this, you'll need to implement Cross-Origin Resource Sharing on the localhost:3001 service. In a nutshell, this entails adding an Access-Control-Allow-Origin header to responses, listing the origins which are permitted to make cross-domain requests to the service. In this case, adding an Access-Control-Allow-Origin: http://localhost:3000 header to the response from the localhost:3001 service should allow things to work as you expect.
Incidentally, this is why the browser makes the request but drops the result: a simple GET like this is not preflighted, so the browser has to perform the full request and then check the response headers (i.e. look for an Access-Control-Allow-Origin header) to determine whether the JavaScript is allowed to see the result. Why a HEAD request isn't sufficient, I don't know.
The other alternative is to use JSONP. This is potentially simpler to implement on the server side, but has the disadvantages of only working for GET requests, and requiring slightly trickier coding on the client side.
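
For completeness, a sketch of the JSONP route with jQuery. This assumes the localhost:3001 Rails app wraps its JSON in the supplied callback parameter (an assumption about your controller, not something Rails does by default):

// The "callback=?" token makes jQuery issue a JSONP request instead of an XHR.
$.getJSON('http://localhost:3001/dashboard/widgets/marketing_efficiency_gauge.json?callback=?',
  { key: 'value' },
  function (data) {
    console.log(data); // same data as before, delivered via a <script> tag
  });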

IIS7 ASP.NET MVC Static JavaScript File Cache?

I have a really simple site that I created. I am trying to test JS caching in the browser but it doesn't seem to be working. I thought that most major browsers cached your JS file by default as long as the file name doesn't change. I have the site running in IIS 7 locally.
For my test I have a simple JS file that does a document.write on body load. If I make a change to the JS file (change the text it writes) and then save the file, I see the update when I refresh the browser. Why is this? Shouldn't I see the original output as long as the JS file name hasn't changed?
Here is the simple site I created to test.
When you refresh your browser, the browser sends a request to the server for all the resources required to display the page. If the browser has a cached version of any of the required resources, it may send an If-Modified-Since header in the request for that resource. When a server receives this header, rather than just serving up the resource, it compares the modified time of the resource to the time submitted in the If-Modified-Since header. If the resource has changed, the server will send back the resource as usual with a 200 status. But, if the resource has not changed, the server will reply with a status 304 (Not Modified), and the browser will use its cached version.
In your case, the modified date has changed, so the browser sends the new version.
The best way to test caching in your browser would probably be to use Fiddler and monitor requests and responses while you navigate your site. Avoid using the refresh button in your testing, as that frequently causes the browser to request fresh copies of all resources (i.e., omitting the If-Modified-Since header).
Edit: The above may be an over-simplification of what's going on. Surely a web search will yield plenty of in-depth articles that can provide a deeper understanding of how browser caching works in each browser.
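
If you want to watch the conditional exchange yourself, here is a rough sketch you can paste into the browser devtools console (the script path is a placeholder for your own file):

// Fetch the script once, then re-request it with a conditional header.
// Setting the header manually bypasses the browser cache, so the raw
// 304 status becomes visible to the script instead of being handled silently.
const res1 = await fetch('/scripts/site.js');
const lastModified = res1.headers.get('Last-Modified');

const res2 = await fetch('/scripts/site.js', {
  headers: { 'If-Modified-Since': lastModified },
  cache: 'no-store'
});
console.log(res2.status); // 304 if the file has not changed, 200 otherwise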
