I am working on a JS app which loads some XML files stored on another server.
I am of course using XHR to load and access their content.
At the first level I just access the top-level file, which contains the names of the elements, and I show them in thumbnail format. My idea is to show the title of the files and the number of items that each element contains. Then, at the second level, when the user clicks on one of the elements (thumbnails), I load the content of the clicked element.
So I am wondering whether my process is performant, since I load or access the files twice: the first time when I show the number of items an element contains, and a second time when the user clicks on the element and I show its content.
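Roughly, the flow is something like this (the URLs, XML tag names, and rendering helpers below are placeholders, not my real files):

function loadXml(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () { onDone(xhr.responseXML); };
  xhr.send();
}

// Level 1: read the top-level file, then fetch each element's file
// just to count its items for the thumbnail.
loadXml('http://other-server.example/index.xml', function (index) {
  var names = index.getElementsByTagName('element');
  for (var i = 0; i < names.length; i++) {
    (function (name) {
      loadXml('http://other-server.example/' + name + '.xml', function (doc) {
        var count = doc.getElementsByTagName('item').length;
        showThumbnail(name, count); // placeholder rendering helper
      });
    })(names[i].textContent);
  }
});

// Level 2: the same element file is fetched again when its thumbnail is clicked.
function onThumbnailClick(name) {
  loadXml('http://other-server.example/' + name + '.xml', function (doc) {
    showContent(doc); // placeholder rendering helper
  });
}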
Please what do you think? Is there another good solution?
Thanks a lot for your answers :)
Making two HTTP requests where you could make one is usually going to be less than ideal. It's certainly going to take a bit longer, although the "bit" may be very small indeed if the browser can satisfy the second request from cache without even a revalidation query to the source server.
It's up to you what to do. If you keep a copy of the response after using it to create the thumbnail, you're using more memory; if you don't and make a second request to the server, you're doing something that could be slightly slower.
You pays your money and you takes your choice.
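If you do decide to keep a copy, a minimal sketch of that option (the cache object and the URL shape here are assumptions on my part):

var xmlCache = {};   // parsed responses kept in memory after the first request

function getElementXml(name, onDone) {
  if (xmlCache[name]) {        // already fetched when building the thumbnail
    onDone(xmlCache[name]);    // no second HTTP request, at the cost of memory
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://other-server.example/' + name + '.xml'); // assumed URL shape
  xhr.onload = function () {
    xmlCache[name] = xhr.responseXML;
    onDone(xhr.responseXML);
  };
  xhr.send();
}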
I do not want to turn off caching for the site, but I do want to avoid caching in some areas, and I'm wondering about the best way to do that.
First, I pull data from an API to check the "online" status of an advisor.
Next, I store that (and any other data that may change) in a CPT.
At the same time, I store a random string of characters (that I can sort on later giving the appearance of random order).
I pull the data from the API on every page load, because I need real-time data. This makes me cringe, but I don't know any other way. This part isn't cached.
However, when I display the list of "advisors", I sort them by online status, then the random string. This is meant to give fairness as to who is above the fold, and near the beginning of the results.
Well, that is all generated with PHP, so the resulting HTML is cached.
I read a bit about the WP REST API, and perhaps that will help with the speed of the query, but that won't help with the cached HTML, right?
So, regardless of how I query the data (REST API, WP_Query), am I to assume that I must iterate through the data with JavaScript to avoid it being cached by the Full Page Cache solution of the server?
If I use WP_Query still, and I use PHP to display the results, can I just call the PHP function from JavaScript?
Every page of the site will display some or all of the advisors (e.g. the homepage shows 8 advisors, the "advisor" page shows all of them, the "advisor" category pages show their own, and 4 advisors appear in the footer of every other page), so it doesn't make sense to turn off caching.
Any direction would be greatly appreciated! Thanks in advance.
Are you not better off doing it with AJAX?
Or, I'm pretty sure there's a line of PHP that you can add to pages so they won't get cached. WooCommerce sometimes needs this, for example. I guess it depends on which caching plugin you are using. Which one are you using?
Or, for example, WP Super Cache has an area where you can exclude certain pages from caching: under Settings -> WP Super Cache -> Advanced -> Accepted Filenames & Rejected URIs.
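If you go the AJAX route, a rough sketch of the idea, assuming a hypothetical admin-ajax.php action named get_advisors that returns the sorted advisor list as JSON (you would still have to register that handler in PHP yourself):

// The cached page ships without the advisor list; this fills it in with
// fresh data on every load, so the full-page cache never sees it.
jQuery(function ($) {
  $.getJSON('/wp-admin/admin-ajax.php', { action: 'get_advisors' }, function (advisors) {
    var $list = $('#advisor-list'); // assumed container in the theme markup
    $.each(advisors, function (i, advisor) {
      $list.append(
        $('<li>').text(advisor.name + (advisor.online ? ' (online)' : ''))
      );
    });
  });
});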
I have a class called Photo with Parse.Files for different sizes of the photo and a bunch of photo metadata. Everything works fine until I try to update one of the non-File fields on a fully populated Photo object: if all of the File fields (original photo, resized photo, and thumbnail) are populated, then updating one of the String metadata fields takes >3 seconds!
I have checked save performance against every other class I have, and they all update as quickly as expected (<1 second). Why would it be checking whether the File binaries are dirty (or whatever else it might be doing on the Parse server during the save), such that the save operation takes so long?
Here is an image of the performance of a Photo Object save in Chrome network browser:
And this is an example of a save for an object of class that just has primitive data types (no files):
Anyone have any insights about what is going on or how I can get around this? 3 seconds is way too long just to update a String field on a Parse Object!
I was able to determine that this was actually my afterSave handler for this class doing unnecessary work.
It was only supposed to run at initial object creation, but was running on every save and sometimes even recursively. I added some logic to achieve the desired behavior, and everything looks to be working as expected.
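A sketch of that kind of guard in Cloud Code (the creation-only work here is just a placeholder, not my actual handler):

Parse.Cloud.afterSave('Photo', function (request) {
  // Only do the expensive work when the Photo is first created,
  // not on every later metadata update (and never recursively).
  if (request.object.existed()) {
    return;
  }
  doInitialProcessing(request.object); // placeholder for the creation-only work
});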
A good point to note is that it looks like the HTTP request for save() will not return until after the afterSave cloud module has completely finished running. This wasn't clear to me until after thorough testing.
I've created a subscription-based system that deals with a large data-set. In its first iteration, it had semi-complicated joins that would execute, based on user-set filters, on every 'data view' page. Each query would fetch anywhere from a few kilobytes to several megabytes depending on the filter range. I decided this was unacceptable and so learned about APC (I had heard about its data-store features).
I moved all of the strings out of the queries into an APC preload routine that fires upon first login. In the same routine, I am running the "full set" join query to get all of the possible IDs for the data set into a $_SESSION variable. The entire set is anywhere from 100-800Kb, depending on what data the customer is subscribed to.
I convert this set into a JSON array and shuffle the data around dynamically when the user changes the filters. In creating the system I wanted it to seem as if the user was moving around lots of data very quickly, with minimal page loading (AJAX + APC when string representations are needed), as they played with the filters.
My multipart question is, is it possible for the user to effectively "cancel" the initial cache/query routine by surfing to another page after the first login? If so, can I move this process to an AJAX page for preloading, or does this carry the same problem? Or, am I just going about all of this in the wrong way? I came up with the idea on my own and I'm worried that I've created an unusable monster.
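By "move to AJAX" I mean something roughly like this, where preload.php is a hypothetical endpoint that runs the APC/$_SESSION warm-up:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/preload.php');   // hypothetical endpoint running the cache/query routine
xhr.onload = function () {
  // APC and $_SESSION are now warm server-side; unlock the filter UI.
  enableFilters();                 // placeholder for whatever the page does next
};
xhr.send();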
Also, I've been warned that my questions suck and I'm in danger of being banned. Every question I've asked has come from a position of intelligent wonder, written as well as I knew how at the time, and so it's really aggravating when an outsider votes me down without intelligent criticism. Just tell me what I did wrong and I will quickly fix the problem. Bichis.
Suppose I display a few dozen thumbnails across a few web pages (10 thumbnails per page). I would like to load them as quickly as possible.
Does it make sense to fetch several thumbnails with one HTTP request (one thumbnail is ~10 KB)? How would you suggest doing it with JavaScript?
You can, but you need to jump through a few hoops:
1) Base-64 encode the images on the server as a single file.
2) Send them to the client as a single request blob, via AJAX.
3) Decode the images back into pieces.
4) Use Data-URIs to insert them into the DOM.
...not really worth it.
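For what it's worth, steps 2-4 on the client might look roughly like this, assuming the server returns a JSON array of { id, mime, data } objects (that response shape and the endpoint are assumptions):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/thumbnails?page=1');            // hypothetical endpoint
xhr.onload = function () {
  var thumbs = JSON.parse(xhr.responseText);      // step 3: split the blob back into pieces
  thumbs.forEach(function (t) {
    var img = document.createElement('img');
    img.src = 'data:' + t.mime + ';base64,' + t.data;   // step 4: Data-URI into the DOM
    document.getElementById('thumbnails').appendChild(img);
  });
};
xhr.send();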
Regarding network performance, it really does make sense.
You could, for example, put a predefined number of thumbnails together in a single image.
On the client side you can treat that image using the "CSS sprite" technique
(http://www.w3schools.com/css/css_image_sprites.asp)
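As a rough illustration (assuming, say, ten 100x100 thumbnails laid out side by side in one sprite image):

var SPRITE_URL = '/sprites/page1-thumbs.png';   // assumed combined image
var THUMB_SIZE = 100;

for (var i = 0; i < 10; i++) {
  var div = document.createElement('div');
  div.style.width = THUMB_SIZE + 'px';
  div.style.height = THUMB_SIZE + 'px';
  div.style.backgroundImage = 'url(' + SPRITE_URL + ')';
  // Shift the background left so only the i-th slice shows through.
  div.style.backgroundPosition = -(i * THUMB_SIZE) + 'px 0';
  document.getElementById('thumbnails').appendChild(div);
}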
If it's important for you to send the images as fast as possible, I would consider sending them as a sprite. Unfortunately this may be somewhat difficult on the back end if the provided images vary. If they are static and the same for every user, it is much easier, as you can manually prepare the images and the front-end code to display the correct image parts.
In combination with the sprite approach it would also be useful to enable progressive/interlaced loading in order to deliver visible results as fast as possible.
I would like to keep the contents of large UI lists cached on the client, and updated either when certain criteria are met or on a regular schedule. Client-side code can then just fill the dropdowns locally, avoiding long page download times.
These lists can be close to 4k items, and dynamically filtering them without caching would result in several rather large round trips.
How can I go about this? I mean, what patterns and strategies would be suitable for this?
Aggressive caching of JSON would work for this: you just hash the JS file and append the hash to the end of its URL, updating it when the file changes. One revision might look like this:
/media/js/ac.js?1234ABCD
And when the file changes, the hash changes.
/media/js/ac.js?4321DCBA
This way, when a client loads the page, your server-side code links to the hashed URL, and the client will get a 304 Not Modified response on their next page load (assuming you have this enabled on your server). If you use this method you should set the files to never expire, as the "expiring" portion is dealt with by the hash, i.e., when the JS file does change, the hash will change and the client won't get a 304, but rather a 200.
ac.js might contain a list or other iterable that your autocomplete code can parse as its completion pool, and you'd access it just like any other JS variable.
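For illustration, ac.js might look something like this (the variable name and items are made up):

// Contents of /media/js/ac.js?1234ABCD -- just a plain JS variable holding
// the completion pool, cached aggressively by the browser.
var completionPool = [
  'Aardvark',
  'Abacus',
  'Zebra'
  // ...up to a few thousand items
];

// Elsewhere, the autocomplete code reads it like any other variable:
function suggestionsFor(prefix) {
  return completionPool.filter(function (item) {
    return item.toLowerCase().indexOf(prefix.toLowerCase()) === 0;
  });
}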
Practically speaking, though, this shouldn't be necessary for most projects. Using something like memcached server-side and gzip compression will make the file both small and amazingly fast to load. If the list is HUGE (say thousands upon thousands of items) you might want to consider this.
Combres is a good solution for this - it will track changes and have the browser cache the js forever until a change is made, in which case it changes the URL of the item.
http://combres.codeplex.com/
Rather than storing the data locally, you might consider using jQuery and AJAX to dynamically update the dropdown lists. Calls can be made whenever needed and the downloads would be pretty quick.
Just a thought.
This might be helpful:
http://think2loud.com/using-jquery-and-xml-to-populate-a-drop-down-box/
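Along the lines of that article, a small sketch (the URL, XML structure, and element ID are placeholders):

$.ajax({
  url: '/data/options.xml',            // placeholder endpoint returning XML
  dataType: 'xml',
  success: function (xml) {
    var $select = $('#my-dropdown').empty();
    $(xml).find('option').each(function () {
      $select.append(
        $('<option>').val($(this).attr('value')).text($(this).text())
      );
    });
  }
});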
If it's just textual data, you have compression enabled on the web server, and there are fewer than 100 items, then there may be no need to maintain lists in the client script.
It's usually best to put all your data (list items are data) in one place so you don't have to worry about synchronization.