I'm trying to figure out a way to cache my knockoutJS SPA data and I've been experimenting with amplifyJS. Here's one of my GET functions:
UserController.prototype.getUsers = function() {
    var self = this;
    return $.ajax({
        type: 'GET',
        url: self.Config.api + 'users'
    }).done(function(data) {
        self.usersArr(ko.utils.arrayMap(data.users, function(item) {
            // run each item through model
            return new self.Model.User(item);
        }));
    }).fail(function(data) {
        // failed
    });
};
Here's the same function, "amplified":
UserController.prototype.getUsers = function() {
    var self = this;
    if (amplify.store('users')) {
        self.usersArr(ko.utils.arrayMap(amplify.store('users'), function(item) {
            // run each item through model
            return new self.Model.User(item);
        }));
    } else {
        return $.ajax({
            type: 'GET',
            url: self.Config.api + 'users'
        }).done(function(data) {
            // cache the raw data so subsequent calls can skip the request
            amplify.store('users', data.users);
            self.usersArr(ko.utils.arrayMap(data.users, function(item) {
                // run each item through model
                return new self.Model.User(item);
            }));
        }).fail(function(data) {
            // failed
        });
    }
};
This works as expected, but I'm not sure about the approach I used, because it will also require extra work on the addUser, removeUser and editUser functions. And seeing as I have many more similar functions throughout my app, I'd like to avoid the extra code if possible.
I've found a way of handling things with the help of ko.extenders, like so:
this.usersArr = ko.observableArray().extend({ localStore: 'users' });
The ko.extenders.localStore function then updates the local storage data whenever it detects a change inside the observableArray. So on init it writes to the observableArray if local storage data exists for the users key, and on changes it updates the local storage data.
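For reference, a minimal sketch of what such an extender might look like (the amplify-based storage and the exact shape are assumptions based on the description above):

ko.extenders.localStore = function(target, key) {
    // On init, populate the observable from local storage if data exists
    var stored = amplify.store(key);
    if (stored !== undefined) {
        target(stored);
    }
    // On every change, write the new value back to local storage
    target.subscribe(function(newValue) {
        amplify.store(key, newValue);
    });
    return target;
};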
My problem with this approach is that I need to run my data through the model, and I couldn't find a way to do that from the localStore function, which is kept in a separate file.
Have any of you worked with KO and Amplify? What approach did you use? Should I use the first one, or try a combination of the two and rewrite the extender so that it only updates the local storage without writing to the observableArray on init?
Following the discussion in the question's comments, I suggested using native HTTP caching instead of adding another caching layer on the client by means of an extra library.
This would require implementing a conditional request scheme.
Such a scheme relies on freshness information in the Ajax response headers via the Last-Modified (or ETag) HTTP headers and other headers that influence browser caching (like Cache-Control: with its various options).
The browser transparently sends an If-Modified-Since (or If-None-Match) header to the server when the same resource (URL) is requested subsequently.
The server can respond with HTTP 304 Not Modified if the client's information is still up-to-date. This can be a lot faster than re-creating a full response from scratch.
From the Ajax request's point of view (jQuery or otherwise), a response works the same way whether it actually came from the server or from the browser's cache; the latter is just a lot faster.
This requires carefully adapting the server side; the client side, on the other hand, does not need much change.
The benefit of implementing conditional requests is reduced load on the server and faster response behavior on the client.
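A minimal sketch of the server side, assuming a Node.js/Express backend (the framework and the helper functions are assumptions; the same idea applies to any stack):

var express = require('express');
var app = express();

app.get('/users', function(req, res) {
    // getUsersLastModified() is a hypothetical helper returning a Date
    var lastModified = getUsersLastModified();
    var ifModifiedSince = req.get('If-Modified-Since');

    if (ifModifiedSince && new Date(ifModifiedSince) >= lastModified) {
        // The client's copy is still fresh: send 304 with no body
        return res.status(304).end();
    }

    res.set('Last-Modified', lastModified.toUTCString());
    res.json({ users: loadUsers() }); // loadUsers() is hypothetical
});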
A Knockout-specific feature can improve this even further:
If you happen to use the mapping plugin to map raw server data to a complex view model, you can define - as part of the options that control the mapping process - a key function. Its purpose is to match parts of your view model against parts of the source data.
This way parts of the data that already have been mapped will not be mapped again, the others are updated. That can help reduce the client's processing time for data it already has and, potentially, unnecessary screen updates as well.
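A sketch of such a key function using the mapping plugin (the "id" property and reusing the Model.User constructor from the question are assumptions about your data shape):

var mappingOptions = {
    users: {
        // Match incoming items to already-mapped entries by id so they
        // are updated in place rather than re-created ("id" is an assumption)
        key: function(item) {
            return ko.utils.unwrapObservable(item.id);
        },
        // New items can still be run through the model here
        create: function(options) {
            return new Model.User(options.data);
        }
    }
};

// Re-running this against fresh server data only maps what changed
ko.mapping.fromJS({ users: data.users }, mappingOptions, viewModel);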
Related
I am using Vue.js and the Choices.js JavaScript plugin, and I have to dynamically populate the values of two select fields via ajax.
What I am trying to achieve is to initiate a GET request at page load to populate the universities select, and after a value in the universities select is chosen, start a new GET request to populate the faculties select.
What is happening is that when I pick the university for the first time, everything works normally. For example, if I pick a university option with value="1", an ajax GET request is sent to /faculties?university_id=1. The console log prints "onChange started", so we know the method is running correctly; the appropriate v-model="selectedUniversity" is updating too.
If I now change the value of the select field again, the ajax function won't be called anymore and no additional requests are made to the server. The console.log still runs, and the v-model is still being updated. Does anyone understand what is going on here?
var Choices = require('choices.js');

module.exports = {
    data: function() {
        return {
            selectedUniversity: '',
            selectedFaculty: '',
            universities: {},
            faculties: {}
        }
    },
    mounted: function () {
        var self = this;
        var universitySelect = new Choices(document.getElementById('university'));
        universitySelect.ajax(function(callback) {
            fetch('/universities')
                .then(function(response) {
                    response.json().then(function(data) {
                        callback(data, 'id', 'name');
                        self.universities = data;
                    });
                })
                .catch(function(error) {
                    console.log(error);
                });
        });
    },
    methods: {
        onChange: function () {
            console.log("onChange started");
            var self = this;
            var url = '/faculties?university_id=' + self.selectedUniversity;
            var facultySelect = new Choices(document.getElementById('faculty'));
            // This part below only runs the first time when the university select is selected
            facultySelect.ajax(function(callback) {
                fetch(url)
                    .then(function(response) {
                        response.json().then(function(data) {
                            callback(data, 'id', 'name');
                            self.faculties = data;
                        });
                    })
                    .catch(function(error) {
                        console.log(error);
                    });
            });
        }
    }
}
I think your request URL /faculties?university_id=1 is being cached; that's why it worked the first time, and on the second request the response came from the cache.
In your fetch() call, set the cache mode so the cached response is ignored:
fetch(url, {cache: "no-store"}).then(....)
For the complete list of cache modes for the fetch() API, see:
https://hacks.mozilla.org/2016/03/referrer-and-cache-control-apis-for-fetch/
In case the above link becomes unavailable, the relevant content follows:
Fetch cache control APIs
The idea behind this API is specifying a caching policy for fetch to explicitly indicate how and when the browser HTTP cache should be consulted. It’s important to have a good understanding of the HTTP caching semantics in order to use these most effectively. There are many good articles on the web such as this one that describe these semantics in detail. There are currently five different policies that you can choose from.
“default” means use the default behavior of browsers when downloading resources. The browser first looks inside the HTTP cache to see if there is a matching request. If there is, and it is fresh, it will be returned from fetch(). If it exists but is stale, a conditional request is made to the remote server and if the server indicates that the response has not changed, it will be read from the HTTP cache. Otherwise it will be downloaded from the network, and the HTTP cache will be updated with the new response.
“no-store” means bypass the HTTP cache completely. This will make the browser not look into the HTTP cache on the way to the network, and never store the resulting response in the HTTP cache. Using this cache mode, fetch() will behave as if no HTTP cache exists.
“reload” means bypass the HTTP cache on the way to the network, but update it with the newly downloaded response. This will cause the browser to never look inside the HTTP cache on the way to the network, but update the HTTP cache with the downloaded response. Future requests can use that updated response if appropriate.
“no-cache” means always validate a response that is in the HTTP cache even if the browser thinks that it’s fresh. This will cause the browser to look for a matching request in the HTTP cache on the way to the network. If such a request is found, the browser always creates a conditional request to validate it even if it thinks that the response should be fresh. If a matching cached entry is not found, a normal request will be made. After a response has been downloaded, the HTTP cache will always be updated with that response.
“force-cache” means that the browser will always use a cached response if a matching entry is found in the cache, ignoring the validity of the response. Thus even if a really old version of the response is found in the cache, it will always be used without validation. If a matching entry is not found in the cache, the browser will make a normal request, and will update the HTTP cache with the downloaded response.
Let’s look at a few examples of how you can use these cache modes.
// Download a resource with cache busting, to bypass the cache
// completely.
fetch("some.json", {cache: "no-store"})
.then(function(response) { /* consume the response */ });
// Download a resource with cache busting, but update the HTTP
// cache with the downloaded resource.
fetch("some.json", {cache: "reload"})
.then(function(response) { /* consume the response */ });
// Download a resource with cache busting when dealing with a
// properly configured server that will send the correct ETag
// and Date headers and properly handle If-Modified-Since and
// If-None-Match request headers, therefore we can rely on the
// validation to guarantee a fresh response.
fetch("some.json", {cache: "no-cache"})
.then(function(response) { /* consume the response */ });
// Download a resource with economics in mind! Prefer a cached
// albeit stale response to conserve as much bandwidth as possible.
fetch("some.json", {cache: "force-cache"})
.then(function(response) { /* consume the response */ });
Coming from a .NET world where synchronicity is a given, I can query my data from a back-end source such as a database, Lucene, or even another API. I'm having trouble finding a good example of this for Node.js, where async is the norm.
The issue I'm having is that a client makes an API call to my hapi server, and from there I need to take in the parameters, form an Elasticsearch query, call it using the request library, and then wait for the result to return before populating my view and sending it back to the client. The problem is that the request library uses a callback that fires once the data is returned, and the empty view has long since been returned to the client by then.
Attempting to place the return within the callback doesn't work, since the end of the JavaScript function was already hit and null was returned in its place. What is the best way to retrieve data within a service call?
EX:
var request = require('request');

var options = {
    url: 'localhost:9200',
    path: {params},
    body: {
        {params}
    }
};

request.get(options, function(error, response) {
    // do data manipulation and set view data
});
// generate the view and return the view to be sent back to client
Wrap the request call in your hapi handler by nesting callbacks so that the async tasks execute in the correct logical order. Pseudo hapi handler code is as follows:
function (request, reply) {
    Elasticsearch.query((err, results) => {
        if (err) {
            return reply('Error occurred getting info from Elasticsearch');
        }
        // data is available for the view here, e.g.:
        // reply.view('results', { data: results }); (assumes the vision plugin)
    });
}
As I said earlier in your last question, use hapi's pre handlers to help you do async tasks before replying to your client. See the docs here for more info. Also, use wreck instead of request; it is more robust and simpler to use. A sketch of the pre-handler approach follows.
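A rough sketch of a route using a pre handler (the hapi v16-style reply interface is assumed, Elasticsearch.query is the hypothetical helper from the pseudocode above, and reply.view assumes the vision plugin is registered):

server.route({
    method: 'GET',
    path: '/search',
    config: {
        pre: [{
            // The pre handler runs before the main handler; the value
            // passed to reply() is stored on request.pre under 'esResults'
            assign: 'esResults',
            method: function (request, reply) {
                Elasticsearch.query(function (err, results) {
                    if (err) {
                        return reply(err); // passing an Error aborts the request
                    }
                    reply(results);
                });
            }
        }],
        handler: function (request, reply) {
            // By the time we get here, the async work is already done
            reply.view('results', { data: request.pre.esResults });
        }
    }
});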
I'm putting together an app using WP-API with an Angular frontend. I'm developing locally, with the data I'm trying to use loaded in from a remote server. Doing a $resource request for all posts works great.
But, I'm trying now to get the result of the X-WP-TotalPages header and can't figure out how to do it. Here's the code as it stands:
var request = function() {
    var fullRoute = 'http://dylangattey.com/wp-json/posts/';
    var defaultGet = {
        method: 'GET',
        transformResponse: function(data, headers) {
            var response = {};
            response.posts = JSON.parse(data);
            response.headers = headers();
            console.log(headers('X-WP-TotalPages'));
            return response;
        }
    };
    return $resource(fullRoute, {}, {
        get: defaultGet,
        query: defaultGet
    });
};
// This gives me back the first 10 posts
request().query();
Chrome and curl both show that X-WP-TotalPages should be equal to 2 as a header. However, it just logs undefined.
Am I missing something? No matter whether I use $http or $resource I get the same result. I also have the same issue whether I use a remote site or a local WP installation on localhost. Really, I just want to know the total number of pages or even just the total number of posts for a given request, so if there's a better way to do it, I'd love to know.
You probably need to expose the specific headers you want on the server side before the browser will let cross-origin client code read them.
See this thread for a brief discussion, or MDN on Access-Control-Expose-Headers.
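For example, the server's response would need to include something like the following header (how you add it depends on your server configuration; the exact header list here is an assumption):

Access-Control-Expose-Headers: X-WP-Total, X-WP-TotalPages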
Currently I have a problem displaying 'chunks' of responses that I am sending from my Web Service Node.js server (localhost:3000) to a simulated client running on a Node.js server (localhost:3001).
edit - The current implementation just uses Angular's $http as the transport, without web-sockets.
The logic goes as follows:
1. Create an array of 'Cities' on the client side and POST it (from the AngularJS controller) to the Web Service located at localhost:3000/getMatrix:
$http({
    method: 'POST',
    url: 'http://localhost:3000/getMatrix',
    data: cityArray
}).
success(function (data, status, headers, config) {
    // binding of $scope variables
    // calling a local MongoDB to store each data item received
    for (var key in data) {
        $http.post('/saveRoutes', data[key])
            .success(function (data, status) {
                // Data stored
            })
            .error(function (data, status) {
                // error prints in console
            });
    }
}).
error(function (data, status, headers, config) {
    alert("Something went wrong!!");
});
2. The Web Service then runs through its process to build a matrix of 'Cities' (e.g. if it was passed 5 cities, it would return a JSON matrix of 5 by 5 [25 items]). But the catch is that it passes the data back in 'chunks', thanks to Node's response.write(data).
Side note - Node.js automatically sets 'Transfer-Encoding':'chunked' in the header
// * Other code before (routing/variable creation/etc.) *
res.set({
    'Content-Type': 'application/json; charset=utf-8'
});
res.write("[\n");

// * Other code to process loops and pass arguments *
// query.findOne to MongoDB and if there are no errors
res.write(JSON.stringify(docs) + ",\n");

// * insert more code to run loops to write more chunks *
// at the end of all loops
res.end("]");
// Final JSON looks like such
[
{ *data* : *data* },
{ *data* : *data* },
......
{ *data* : *data* }
]
Currently the problem is not that the 'chunked' response is not reaching its destination, but that I do not know of a way to start processing the data as soon as the chunks come in.
This is a problem since I am trying to do a matrix of 250x250 and waiting for the full response overloads Angular's ability to display the results as it tries to do it all at once (thus blowing up the page).
This is also a problem since I am trying to save the response to MongoDB and it can only handle a certain size of data before it is 'too large' for MongoDB to process.
I have tried looking into Angular's $q and the promise/defer API, but I am a bit confused on how to implement it and have not found a way to start processing data chunks as they come in.
This question on SO about dealing with chunks did not seem to help much either.
Any help or tips on trying to display chunked data as it comes back to AngularJS would be greatly appreciated.
If the responses could be informative code snippets demonstrating the technique, I would greatly appreciate it since seeing an example helps me learn more than a 'text' description.
-- Thanks
No example, because I am not sure what you are using in terms of transport code or whether you have a websocket available:
$http does not run any of its callbacks until a success code is passed back at the end of the request - it listens for onreadystatechange with a 200-like value.
If you want to stream like this, you can wrap $http in a transport layer that makes multiple $http calls which each end and return a success header.
You could also use websockets, and instead of calling $http, emit an event on the socket.
Then, to get the chunks back to the client, have the server emit each chunk as a new event on the backend, and have the front-end listen for that event and process each one as it arrives. A sketch of the websocket approach follows.
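A rough sketch with socket.io (the library choice, event names, and the buildMatrix helper are assumptions; it presumes a socket.io server (io) and client (socket) are already set up):

// Server (localhost:3000): emit each matrix entry as its own event
io.on('connection', function (socket) {
    socket.on('getMatrix', function (cityArray) {
        // buildMatrix is a hypothetical helper that invokes a callback per chunk
        buildMatrix(cityArray, function onChunk(chunk) {
            socket.emit('matrixChunk', chunk);
        }, function onDone() {
            socket.emit('matrixDone');
        });
    });
});

// Client (AngularJS controller): process each chunk as it arrives
socket.emit('getMatrix', cityArray);
socket.on('matrixChunk', function (chunk) {
    $scope.$apply(function () {
        $scope.matrixRows.push(chunk); // render incrementally
    });
    $http.post('/saveRoutes', chunk); // save each piece separately
});
socket.on('matrixDone', function () {
    console.log('matrix complete');
});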
My app uses Backbone.js on the client side and Express.js for the back-end.
I have problems syncing with all parts of my API using the backbone model and collection (they use urlRoot: "/users").
I'm allowed to use only GET or POST, no PUT or DELETE.
I'm not allowed to use more models.
I'm not allowed to use jQuery ajax.
My API
add new user:
I need to make a POST to /users with JSON of new data. So I did it just fine with - this.model.save({new data...})
list all users:
My API for that responds to GET /users with the right handler - so this.collection.fetch() works fine.
Log-In:
My API accepts POST to /users/login for that. How can I add a "logIn" function to my model that will use a custom sync, or pass options.url to sync - or any other way - so that it POSTs to /users/login?
Log-Out:
The API accepts POST to /users/logout - how do I send this request using my backbone model?
User By ID:
Same question here for GET /users/:id
Update user:
POST /users/:id - same question again.
--- So actually, the question in short is ---
What is the best (or most "right") way to implement methods on a backbone model that are similar to model.save(), but POST/GET to a slightly different path than urlRoot?
You probably have a couple of options here. One would be structuring your models in a way that supports the urls you want. For instance, have a User model and a Session model that deal with updating the user and managing the logged-in state separately.
The other thing you should probably do is use the url method in your models.
Something like this in your User model. (Note: using urlRoot instead of url here would be equivalent, but a url function is the correct approach for anything more complicated that needs to go into the url.)
url: function() {
    var base = "/users/";
    if (this.isNew()) {
        return base;
    } else {
        return base + this.get("id");
    }
}
You could extend this same concept to your Session model for handling logout vs. login, based on whether the Session is new, as sketched below.
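A sketch of that idea (the Session model, and the assumption that the login response returns an id so the session is no longer new, are mine, not from the original answer):

var Session = Backbone.Model.extend({
    url: function () {
        // New session -> logging in; existing session -> logging out
        return this.isNew() ? '/users/login' : '/users/logout';
    }
});

// Usage: save() on a new model issues a POST to /users/login.
// For logout, force POST since the API does not allow PUT.
var session = new Session();
session.save({ username: 'u', password: 'p' }); // POST /users/login
// later, assuming the server's login response included an id:
session.save(null, { type: 'POST' });           // POST /users/logout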
Update:
If you must use the same model, the best thing would be to totally bypass the Backbone.sync method and write a custom AJAX call with success/error handlers that know how to clean things up.
login: function () {
    var self = this;
    $.ajax({
        type: 'POST',
        url: '/users/login',
        data: this.toJSON(),
        success: function (response) {
            // Update user as needed
            self.set(response);
        },
        error: function (xhr, status, error) {
            // Handle errors
        }
    });
}