Performance - How big is too big for an ajax request? - javascript

I have a web app that's constantly sending and requesting JSON objects to/from the server. These JSON objects can get as big as 20-40kb, and these requests might happen once every 5 to 20 seconds, depending on the user interaction.
I decided to keep my processing on the client side, so the user can use my web app without having to keep an active internet connection, but I need to sync the data to the server every once in a while. I couldn't think of a better solution than storing/processing the data on the client as JavaScript objects and eventually saving them as JSON on a server. (This would also enable me to serve these objects through an API to mobile applications in the future.)
I'd like to know how sending these relatively large JSON payloads back and forth could hurt my application's performance, compared to just sending simple AJAX requests of a few bytes and doing all the processing on the server, and how I could make this more optimized.

20-40KB JSON payloads per request are pretty small, according to tests done by Josh Zeigler, where the DOM Ready event took less than 62 milliseconds (the maximum, in IE) across 4 major browsers for a 40KB JSON payload.
The tests were done on a 2011 2.2GHz i7 MacBook Pro with 8GB of RAM.
Here's the detailed test and test results: How Big is TOO BIG for JSON? Credit: Josh Zeigler
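If you want to sanity-check those numbers against your own data, a minimal sketch along these lines (run in the browser console; appState is just a stand-in for your real client-side objects) measures the serialize/parse cost of a payload in that size range:

```javascript
// Build a payload roughly in the 40KB range; replace with your real app state.
const appState = {
  items: Array.from({ length: 1000 }, (_, i) => ({ id: i, note: 'x'.repeat(20) })),
};

const t0 = performance.now();
const json = JSON.stringify(appState); // roughly what you'd send in the request body
const t1 = performance.now();
const copy = JSON.parse(json);         // roughly what the receiving side pays
const t2 = performance.now();

console.log(`payload size: ${(json.length / 1024).toFixed(1)} KB`);
console.log(`stringify: ${(t1 - t0).toFixed(2)} ms, parse: ${(t2 - t1).toFixed(2)} ms`);
```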

Related

Specify socket send buffer size when performing HTTP POST request

Some background: I am working with legacy code, and am attempting to upload a binary file (~2MB) to an embedded microhttpd web server via an HTTP form (POST request). Lately I've noticed that the upload speed from Windows 10 machines is significantly slower than from non-Windows 10 machines: the upload will send a handful of bytes at a time (about 6-7 chunks of ~1500 bytes each) and will then pause, sometimes for 30-60 seconds, before sending another handful of bytes. The decrease in speed caused by this issue renders the whole upload process unusable.
I performed some debugging on the embedded server and found that it was indeed waiting for more data to come in on the socket created for the POST request, and that for some reason this data was not being sent by the client machine. After some analysis in Wireshark on Win10 vs non-Win10 traffic, the messages I am seeing appear to tally up with the issue described by Microsoft here: https://support.microsoft.com/en-gb/help/823764/slow-performance-occurs-when-you-copy-data-to-a-tcp-server-by-using-a.
Specifically, I am seeing that in the case of Windows 10, the first TCP packet sent to the embedded web server is indeed "a single send call [that] fills the whole underlying socket send buffer", as per the Microsoft article. This does not appear to be the case for non-Windows 10 machines. Hence, I need to be able to set up my sockets so that the web client does not send so much data as to fill up the receive buffer in one packet.
Unfortunately, major modifications to the web server itself (aside from little config tweaks) are out of the question, since the legacy code I'm working with is notoriously coupled and unstable. Therefore, I'm looking for a way to specify socket settings via JavaScript, if this is possible. I'm currently using the JQuery Form plugin, which operates on top of XMLHttpRequests. Since I have complete control over both the JavaScript page and the embedded web backend, I can hard-code the socket buffer sizes appropriately in both cases.
Is there a way to tweak low-level socket settings like this from JavaScript? If not, would there be another workaround for this issue?
There is no way to do the TCP-stack-specific tuning you need from JavaScript running inside a browser on the client side. The browser simply does not allow this kind of access.
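For what it's worth, the usual application-level workaround (this is not part of the answer above, and it assumes the embedded server could be taught to reassemble pieces, which may clash with the "no major modifications" constraint) is to split the upload into smaller sequential POSTs, so no single send fills the whole socket send buffer. A rough sketch, with a hypothetical endpoint and made-up headers:

```javascript
// Upload a File/Blob in sequential chunks; each POST completes before the next starts.
async function uploadInChunks(file, url, chunkSize = 64 * 1024) {
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    const chunk = file.slice(offset, offset + chunkSize);
    await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/octet-stream',
        'X-Chunk-Offset': String(offset),   // hypothetical header for server-side reassembly
        'X-Total-Size': String(file.size),  // hypothetical header
      },
      body: chunk,
    });
  }
}
```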

Correct and efficient use of websockets?

I'm currently making plans for a real-time web application. More specifically, it will be an idle game where players can interact with each other. I will use a server storing messages sent from players to other players, which allows for communication when any party is not online.
The game page will load all game assets, such as images and static game data. There will be no need to reload the page at any time. I'm planning to send more specific game data over WebSockets (such as world map data). This may result in somewhat larger packets. I expect these packets to range from very small (chat messages, 2 KB) to very large (world data, 100-300 KB). I expect about 3 chat messages to be exchanged per minute per user, to be sent to a number of users (max 10-20). I expect that a user will receive a large set of data every minute or every two minutes.
I'm mainly interested in finding out what kind of performance-related issues may occur, and how to deal with these. A few questions coming to my mind are:
Is it smart to use websockets in this way?
Will it be better to use one socket, or to use two sockets (one for small data fragments, one for big data fragments)? Socket connections will be made to the same server. (A sketch of the single-socket option follows this list.)
What are hazards to keep in mind? (I do not mean backward compatibility issues here.)
Compared to AJAX, do you expect differences in server load? Will these be significant?
What are performance effects to the client and server?
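For the one-socket-versus-two question, here is a minimal sketch of multiplexing both kinds of traffic over a single WebSocket by tagging each message with a type. The endpoint and message shapes are made up for illustration:

```javascript
const socket = new WebSocket('wss://example.com/game'); // hypothetical endpoint

socket.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  switch (msg.type) {
    case 'chat':      // small, frequent payloads (~2 KB)
      renderChat(msg.payload);
      break;
    case 'worldData': // large, infrequent payloads (~100-300 KB)
      updateWorld(msg.payload);
      break;
  }
};

function renderChat(payload) { /* update the chat UI */ }
function updateWorld(payload) { /* apply the new world map data */ }
```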

Why make the server push data?

Why make the server push data for notifications, for example using SignalR, when it can be done on the client side?
Using a JavaScript timing event that checks for recent updates at specified intervals, the user can get notifications as long as they remain connected to the server.
So my question is: why do more work on the server that the client can already do?
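A minimal sketch of the polling approach described in the question (the /notifications endpoint and the 5-second interval are illustrative, not from the original post):

```javascript
let lastCheck = Date.now();

function showNotifications(updates) { /* render the notifications in the UI */ }

// Poll the server on a fixed interval for anything newer than the last check.
setInterval(async () => {
  try {
    const res = await fetch('/notifications?since=' + lastCheck);
    const updates = await res.json();
    if (updates.length) showNotifications(updates);
    lastCheck = Date.now();
  } catch (err) {
    // Ignore transient network errors; the next tick will try again.
  }
}, 5000);
```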
It's not more work to the server, it's less work. Suppose that you have 10000 clients (and the number could easily be in the 100K or even millions for popular web-sites) polling the server every X seconds to find out if there's new data available for them. The server would have to handle 10000 requests every X seconds even if there's no new data to return to the clients. That's huge overhead.
When the server pushes updates to the clients, the server knows when an update is available and it can send it to just the clients this data is relevant to. This reduces the network traffic significantly.
In addition it makes the client code much simpler, but I think the server is the critical concern here.
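To make the contrast concrete, here is a rough sketch of the push model on a Node.js server using the ws package (SignalR fills the same role on ASP.NET); the port and the shape of the update are assumptions:

```javascript
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

// Called by the application only when new data actually becomes available,
// instead of every client asking "anything new?" every X seconds.
function pushUpdate(update) {
  const message = JSON.stringify(update);
  wss.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      client.send(message);
    }
  });
}
```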
First, if you don't use server push you won't get instant updates; for example, you can't build a chat application. Second, why bother the client with a job it isn't designed to do? Third, you'll have performance issues on the client because, as @Ash said, the server is a lot more powerful than a client computer.

WebSockets vs XHR for large amounts of data

I am running SocketIO on NodeJS, and I don't care much about wide browser support, as it's my pet project where I want to use all the power of new technologies to ease development. My concern is how I should send large amounts of JSON data from server to client and back. These amounts are not as large as they could be for video or binary image data; I suppose no larger than hundreds of kilobytes per request.
Two scenarios I see are:
Send a notification via WebSockets from server to client that some data should be fetched. Then the client code runs a regular XHR request to the server and gets the data via XHR.
Send the whole data set over WebSockets from server to client. In this case I don't need to run any additional requests - I just get all the data via WebSockets.
I saw the first approach in Meteor.js, so I wondered about the reasons for it.
Please share your opinion.
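For reference, a minimal sketch of the first scenario, assuming the socket.io client and a hypothetical /api/dataset/:id HTTP endpoint:

```javascript
// socket.io client, e.g. loaded via <script src="/socket.io/socket.io.js"></script>
const socket = io();

// The server emits only a small notification over the socket...
socket.on('dataset-ready', async ({ id }) => {
  // ...and the client fetches the actual payload with a regular XHR/fetch call.
  const res = await fetch(`/api/dataset/${id}`);
  const data = await res.json();
  applyDataset(data);
});

function applyDataset(data) { /* merge the new data into client state */ }
```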
WebSockets should support large data sets (up to 16 exabytes in theory), so from that point of view it should work fine. The advantage of XHR is that you will be able to observe progress over time, and it is in general better tested for large data blocks. For example, I have seen WebSocket server implementations which (thinking retrospectively) wouldn't handle large data well, because they would load the entire payload into memory rather than streaming it; but that's of course not necessarily the case for socket.io (I don't know). Case in point: try it out with socket.io while observing memory usage and stability. If it works, definitely go with WebSockets, because long term the support for big data packages will only get better and definitely not worse. If it turns out to be unstable, or if socket.io can't stream larger data files, then use the XHR construct.
By the way, a quick Google search turned up siofile. I haven't looked into it that much, but it might be just the thing you need.
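If you want to run the experiment suggested above, here is a rough server-side sketch that emits a large payload over socket.io and logs memory usage afterwards; the payload size and event names are made up for the test:

```javascript
const { Server } = require('socket.io');

const io = new Server(3000); // standalone socket.io server on port 3000

// Roughly a few hundred kilobytes of JSON, matching the sizes discussed above.
const bigPayload = {
  rows: Array.from({ length: 5000 }, (_, i) => ({ i, text: 'x'.repeat(100) })),
};

io.on('connection', (socket) => {
  socket.on('give-me-data', () => {
    socket.emit('big-data', bigPayload);
    const { heapUsed, rss } = process.memoryUsage();
    console.log(`heapUsed: ${(heapUsed / 1e6).toFixed(1)} MB, rss: ${(rss / 1e6).toFixed(1)} MB`);
  });
});
```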

Transferring large amounts of json over http

I have large amounts (gigabytes) of json data that I would like to make available via a restful web service. The consumer of the data will be another service, and it will all be happening on a server (so the browser is not involved). Is there a practical limit on how much data can be transferred over http? Will http timeouts start to occur, or is that more a function of a browser?
There's no size limit for an HTTP body; it's just like downloading a huge file through a web browser. And a timeout is a setting of the socket connection that HTTP is built on, so it is not a browser-specific feature.
However, I've run into the same issue transporting quite large JSON objects. What needs to be considered is network load, serialize/deserialize time, and memory cost. The whole process is slow (2 GB of data over an intranet, using JSON.NET plus some calculation, takes us 2-3 minutes) and it uses quite a lot of memory. Fortunately, we only need to do that once a day and it is a back-end process, so we don't pay much more attention to it. We just use sync mode for the HTTP connection and set a long timeout value to prevent timeout exceptions (maybe async would be a good choice).
So I think it depends on your hardware and infrastructure.
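The answer above is .NET-specific, but the same ideas translate to Node.js roughly as follows; the URL, file name, and timeout value are placeholders, and the main points are a generous idle timeout and streaming the body instead of buffering gigabytes in memory:

```javascript
const https = require('https');
const fs = require('fs');

// Stream the response straight to disk instead of holding it all in memory.
const req = https.get('https://internal.example.com/big-dataset.json', (res) => {
  res.pipe(fs.createWriteStream('big-dataset.json'));
});

// Give up only if the socket sits idle for 10 minutes, not on total duration.
req.setTimeout(10 * 60 * 1000, () => {
  req.destroy(new Error('socket idle timeout'));
});
```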