I have a class called Photo with Parse.File fields for the different sizes of the photo, plus a bunch of photo metadata. Everything works fine until I try to update one of the non-File fields on a fully populated Photo object: when all of the File fields (original photo, resized photo, and thumbnail) are populated, updating a single String metadata field takes more than 3 seconds!
I have checked save performance on every other class I have, and they all update as quickly as expected (<1 second). Why would Parse seem to be checking whether the File binaries are dirty (or whatever else it might be doing on the server during the save), making the save operation take so long?
Here is an image of the timing of a Photo object save in Chrome's Network panel:
And this is an example of a save for an object of a class that has only primitive data types (no files):
Anyone have any insights about what is going on or how I can get around this? 3 seconds is way too long just to update a String field on a Parse Object!
I was able to determine that this was actually my afterSave handler for this class doing unnecessary work.
It was only supposed to run at initial object creation, but it was running on every save, and sometimes even recursively. I added some logic to get the desired behavior and everything looks to be working as expected.
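For reference, the guard is roughly this kind of thing (a minimal sketch, assuming the legacy parse.com Cloud Code API; the creation-only work itself is omitted):

Parse.Cloud.afterSave("Photo", function(request) {
  // existed() is true when the object was already saved before this request,
  // so the creation-only work is skipped on every subsequent update.
  if (request.object.existed()) {
    return;
  }
  // ... creation-only work goes here ...
});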
A good point to note is that the HTTP request for save() does not appear to return until the afterSave Cloud Code handler has completely finished running. This wasn't clear to me until after thorough testing.
Related
I'm inexperienced at JavaScript, and don't know how to optimize things for my situation.
I've written an autocomplete function using the jQuery UI Autocomplete plugin. The source for the completion is a JSON array, holding a few thousand items, that I load from my own server. This autocomplete will be attached to a search box that will be on every page of my site, so the source will get loaded a lot; I don't want to request the same array every time anyone hits any page. The completion depends on database values, so I can't just put the array in static form in the code. However, it doesn't have to be perfectly synced; caching it for some amount of time would be fine.
Right now, I'm loading the array with $.getJSON. It seems that using an actual remote source is meant for the AJAX case where the server does the search itself as you type; I think that's probably overkill given that there are only a few thousand items rather than millions, and I don't want to fire a zillion requests every time someone types into the search box.
What is the correct way of handling this? I'm totally unfamiliar with how caching would work in JS, or if there's some built-in way to accomplish a similar thing.
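Just to make "caching it for some amount of time" concrete, is something along these lines (localStorage with a TTL) even a reasonable direction? The URL, cache key, #search selector, and one-hour TTL are all made up for illustration:

// Load the completion source once, caching it in localStorage with a TTL.
function getCompletionSource(callback) {
  var cacheKey = 'autocompleteSource';   // made-up key
  var maxAgeMs = 60 * 60 * 1000;         // e.g. cache for one hour
  var cached = localStorage.getItem(cacheKey);
  if (cached) {
    var parsed = JSON.parse(cached);
    if (Date.now() - parsed.savedAt < maxAgeMs) {
      callback(parsed.items);
      return;
    }
  }
  $.getJSON('/autocomplete-data.json', function(items) {   // made-up URL
    localStorage.setItem(cacheKey, JSON.stringify({ savedAt: Date.now(), items: items }));
    callback(items);
  });
}

getCompletionSource(function(items) {
  $('#search').autocomplete({ source: items });   // jQuery UI Autocomplete with a local array
});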
I am working on a JS app that loads some XML files stored on another server.
I am, of course, using XHR to load and access their content.
At the first level I just access the top-level file, which contains the names of the elements, and I show them in a thumbnail format. My idea is to show the title of the files and the number of items that each element contains. Then, at the second level, when the user clicks on one of the elements (thumbnails), I load the content of the clicked element.
So I am wondering whether my process is performant or not, since I load or access the files twice: the first time when I show the number of items an element contains, and the second time when the user clicks on the element and I show its content.
Please what do you think? Is there another good solution?
Thanks a lot for your answers :)
Making two HTTP requests where you could make one is usually going to be less than ideal. It's certainly going to take a bit longer, although the "bit" may be very small indeed if the browser can satisfy the second request from cache without even a revalidation query to the source server.
It's up to you what to do. If you keep a copy of the response after using it to create the thumbnail, you're using more memory; if you don't and make a second request to the server, you're doing something that could be slightly slower.
You pays your money and you takes your choice.
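To make the first option concrete, here is a rough sketch of keeping a copy of the response around (the URL, the "item" tag name, and the rendering step are placeholders):

var xmlCache = {};   // keeps parsed documents around so a later click doesn't trigger a re-fetch

function loadXml(url, callback) {
  if (xmlCache[url]) {                 // second access: reuse the copy (costs memory, saves a request)
    callback(xmlCache[url]);
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function() {
    xmlCache[url] = xhr.responseXML;
    callback(xhr.responseXML);
  };
  xhr.send();
}

// First use: count items for the thumbnail view.
loadXml('http://other-server.example/items.xml', function(doc) {   // placeholder URL
  var count = doc.getElementsByTagName('item').length;
  // ... render the thumbnail with the count ...
});

// Later, when the user clicks the thumbnail, call loadXml() with the same URL;
// it is served from xmlCache instead of making a second request.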
I have a bunch of Parse objects (can be as many as 200) that need to be updated with a common field set to a common (short) string value. I tried using a loop with save on each one, but then it spiked my API usage beyond the limit, as you can imagine when there were hundreds of them.
So I looked into how to use saveAll to do a batch from the JavaScript client. I got the code itself working fine, and it is trying to update all of the objects as expected. Now, the problem appears to be that it is doing a batch of PUTs inside a single batch POST to https://api.parse.com/1/batch, and while the client treats this as a single HTTP operation, the parse.com servers also treat it as a single operation in terms of the timeout limit.
If I have more than about 5 objects in the batch, it times out (giving error 124), since for some reason each individual save in the batch appears to take ~3 seconds according to Chrome's Network panel. How can a single save take so long?
Also, this raises the question of why it is timing out at all, since each save should be a separate API call (as shown in the requests internal to the batch operation). Since I am running this batch save from the client, shouldn't there be no timeout limit at all, unlike in Cloud Code (where it is 15 seconds)?
Can someone help me understand this? It is a huge bottleneck and I cannot figure out any other workaround. It seems like saving a batch of 5+ objects (with only a single dirty String field) shouldn't be so arduous!
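For reference, the batch update is essentially this (the field name and value are stand-ins; photos is the array of already-fetched Parse objects, and this assumes the promise-style saveAll in the Parse JS SDK):

// photos is an array of already-fetched Parse.Object instances (up to ~200)
for (var i = 0; i < photos.length; i++) {
  photos[i].set("status", "processed");   // stand-in field name and value
}

Parse.Object.saveAll(photos).then(function(savedList) {
  console.log("Batch save finished for " + savedList.length + " objects");
}, function(error) {
  console.log("Batch save failed: " + error.code + " " + error.message);
});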
Since the objects are all being updated with the same string in the same field, have you considered using a collection? As the docs say, you can create a new subclass using either a model class or a particular Parse.Query. The code to update is simple:
// Replace the collection's contents with the new attribute hashes
collection.reset([
  { "name": "Hawk" },
  { "name": "Jane" }
]);
https://parse.com/docs/js_guide#collections
I've created a subscription-based system that deals with a large data-set. In its first iteration, it had semi-complicated joins that would execute, based on user-set filters, on every 'data view' page. Each query would fetch anywhere from a few kilobytes to several megabytes depending on the filter range. I decided this was unacceptable and so learned about APC (I had heard about its data-store features).
I moved all of the strings out of the queries into an APC preload routine that fires upon first login. In the same routine, I am running the "full set" join query to get all of the possible IDs for the data set into a $_SESSION variable. The entire set is anywhere from 100-800Kb, depending on what data the customer is subscribed to.
I convert this set into a JSON array and shuffle the data around dynamically when the user changes the filters. In creating the system I wanted it to seem as if the user was moving around lots of data very quickly, with minimal page loading (AJAX + APC when string representations are needed), as they played with the filters.
My multipart question is, is it possible for the user to effectively "cancel" the initial cache/query routine by surfing to another page after the first login? If so, can I move this process to an AJAX page for preloading, or does this carry the same problem? Or, am I just going about all of this in the wrong way? I came up with the idea on my own and I'm worried that I've created an unusable monster.
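In case it clarifies the AJAX-preload idea, this is roughly what I picture firing from the post-login landing page (the /preload_cache.php endpoint is hypothetical; whether the server-side routine survives the user navigating away is exactly the part I'm unsure about):

// Kick off the heavy cache/query routine in the background right after login,
// so normal page rendering does not wait on it.
$(function() {
  $.ajax({
    url: '/preload_cache.php',   // hypothetical endpoint that runs the APC preload + session fill
    type: 'POST',
    success: function() {
      console.log('Cache preload finished');
    },
    error: function() {
      console.log('Cache preload failed or was interrupted');
    }
  });
});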
Also, I've been warned that my questions suck and I'm in danger of being banned. Every question I've asked has come from a position of intelligent wonder, written as well as I knew how at the time, and so it's really aggravating when an outsider votes me down without intelligent criticism. Just tell me what I did wrong and I will quickly fix the problem. Bichis.
Coming from Python/Java/PHP, I'm now building a website. On it I want a list of items to be updated in near real time: if items get added to or deleted from the list on the server side, this should be reflected on the web page. I made a simple API call which I now poll every second with jQuery to update the list. Because I need several more lists kept updated on the same page, I'm afraid this will turn into more than 10 server calls per second from every single open browser, even if nothing gets updated.
This doesn't seem like the logical way to do it, but I don't really know how else to do it. I looked at Meteor, but since the web page I'm building is part of a bigger system, I'm rather restricted in my choice of technology (basic LAMP setup).
Could anybody enlighten me with a tip from the world of real-time websites on how to efficiently keep a list updated?
You can use WebSocket technology (https://code.google.com/p/phpwebsocket/), but PHP is not the best language for implementing it.
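On the browser side it would look something like this (a minimal sketch; the ws:// URL and the JSON message format are assumptions, and refreshList stands in for your own rendering code):

// Open a persistent connection and update the list whenever the server pushes a change.
var socket = new WebSocket('ws://example.com:8000/updates');   // placeholder URL

socket.onmessage = function(event) {
  var update = JSON.parse(event.data);   // assumes the server sends JSON describing the change
  refreshList(update);                   // stand-in for your own rendering function
};

socket.onerror = function() {
  // fall back to polling here if the connection cannot be established
};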
One way to handle this is to use state variables for the different types of data you want to keep updated (or not).
To avoid re-querying the full tables when the data in them has not changed relative to what a particular client is currently displaying, you could maintain a state counter for each data type on the server (for example, in a small dedicated table) and on the client in a JavaScript variable.
Whenever an update is done on the data tables on the server, you update the state counter there.
Your AJAX polling calls would then query this state counter, compare it to the corresponding JavaScript variable, and only make a data-update call if it has changed, updating the local JavaScript variable to match the server.
To avoid having to poll for each data type separately, you might want to use a JS object with a member for each data type.
Note: yes this is all very theoretical, but, hey, so is the question ;)
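Still, a rough client-side sketch of the idea, with placeholder endpoint names, a single data type, and renderList standing in for your own rendering code:

var lastCounter = 0;   // state the client has rendered so far

setInterval(function() {
  // Cheap call: fetch just the state counter, not the data itself.
  $.getJSON('/state_counter.php', { type: 'items' }, function(response) {   // placeholder endpoint
    if (response.counter !== lastCounter) {
      // Only when the counter has moved, fetch and re-render the actual list.
      $.getJSON('/items.php', function(items) {                             // placeholder endpoint
        renderList(items);                  // stand-in for your own rendering function
        lastCounter = response.counter;
      });
    }
  });
}, 1000);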