Why fetch() uses ReadableStream for body of the response? [closed] - javascript

I was learning fetch() and found out that the body of the response uses something called a ReadableStream. From my research, a readable stream lets us start using the data while it is still being downloaded from the server (I hope I am correct :)). My question is: how can a readable stream be useful with fetch()? Don't we need to download all of the data anyway before we can start using it? Overall, I just cannot understand the point of a readable stream in fetch(), so I'd appreciate your kind help :).

Here's one scenario: a primitive ASCII art "video player".
For simplicity, imagine a frame of "video" in this demo is 80 x 50 = 4000 characters. Your video "decoder" reads 4000 characters, displays them in an 80 x 50 grid, reads another 4000 characters, and so on until the data is finished.
One way to do this is to send a GET request using fetch, get the whole body as one really long string, then start displaying. So for a 100-frame "video" it would receive 400,000 characters before showing the first frame to the user.
But, why does the user have to wait for the last frame to be sent, before they can view the first frame? Instead, still using fetch, read 4000 characters at a time from the ReadableStream response content. You can read these characters before the remaining data has even reached the client.
Potentially, you can be processing data at the start of the stream in the client, before the server has even begun to process the data at the end of the stream.
Potentially, a stream might not even have a defined end (consider a streaming radio station for example).
There are lots of situations where it's better to work with streaming data than it is to slurp up the whole of a response. A simple example is summing a long list of numbers coming from some data source. You don't need all the numbers in memory at once to achieve this - you just need to read one at a time, add it to the total, then discard it.
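To make that concrete, here is a minimal sketch (my own illustration, not from the original answer) of reading a fetch() response incrementally with getReader(); the URL /numbers.txt is hypothetical and is assumed to serve newline-separated numbers:

// Sums numbers from a streamed response without buffering the whole body.
async function sumStreamedNumbers() {
  const response = await fetch('/numbers.txt');    // hypothetical endpoint
  const reader = response.body.getReader();        // ReadableStream reader
  const decoder = new TextDecoder();
  let buffered = '';
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();   // resolves as each chunk arrives
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop();                        // keep any partial line for the next chunk
    for (const line of lines) {
      if (line.trim() !== '') total += Number(line);
    }
  }
  if (buffered.trim() !== '') total += Number(buffered);
  return total;
}

Each call to read() resolves as soon as the next chunk arrives, so the running total is updated while the rest of the response is still in transit.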

Related

Node.js + websocket - Low-bandwidth game entity ids

I have been working on a game for the past few days and I am currently optimizing the websocket frame data. I've changed the data type from plain JSON to an ArrayBuffer, and that part is working fine. Entity data such as position and rotation is sent and received with no problem.
The entity ID is the problem. Currently the client keeps track of every entity by its ID to store its last position for smooth movement, and it will continue to do so.
Every entity on the server currently has a UUID (e.g. f2e9f5e2-a810-416e-a1ce-a300a0b7a088). That is 16 bytes. I'm not sending that!
And now to my question: how do they do it in big games? Is there any way of getting around this, or a way to generate a unique 2-byte (or some other low-bandwidth) UID?
UPDATE
I need more than 256 IDs, which means that 1 byte is obviously not going to work. 2 bytes gives me 65,536 IDs, which is probably enough. I also know that iterating through a loop until you find the next free ID is an option, but maybe that is too "costly".
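There is no accepted answer in this excerpt, but one common approach (my own sketch, not from the post) is to keep a pool of short ids on the server and recycle them when entities are destroyed, which avoids scanning for the next free id:

// Allocates compact 2-byte ids (0..65535) and reuses released ones in O(1).
class IdPool {
  constructor(max = 65536) {
    this.max = max;        // 2 bytes of id space
    this.next = 0;         // next never-used id
    this.free = [];        // ids released by destroyed entities
  }
  acquire() {
    if (this.free.length > 0) return this.free.pop();   // reuse without any scanning loop
    if (this.next >= this.max) throw new Error('id space exhausted');
    return this.next++;
  }
  release(id) {
    this.free.push(id);    // make the id available for a future entity
  }
}

// Usage: map each entity's UUID to a short wire id.
const pool = new IdPool();
const wireId = pool.acquire();   // e.g. 0, fits in a Uint16Array slot of the frame buffer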

Best Practice To send Big JSON Data to Node JS

Node.js: I am building a survey which has 21 questions in it, and the answers are free-form text, so the JSON becomes very big. My question is: how should I send my data to the backend? Should I send the whole JSON, or should I store it in a file and then send that? What is the best practice for this? (Front end: jQuery, backend: NodeJS)
Define "very big", because I seriously doubt that your users will write answers that weigh more than, say, 50 KB. Note that the average Kindle ebook weighs ~2.5 MB with ~300 pages (https://www.quora.com/What-is-the-average-file-size-of-an-e-book). And even that size is not really big for a request.
Anyway, to improve UX it is worth considering splitting the survey into multiple pages and sending each page to the server one-by-one. For that you have to store those pages on the server side of course (possibly with timeouts in case a user decides he won't continue).
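As a rough illustration of the page-by-page idea (my own sketch, not from the answer; the /api/survey/page endpoint and field names are hypothetical), the front end could post each page as it is completed:

// Sends one page of answers as JSON; returns the jqXHR promise.
function submitSurveyPage(pageNumber, answers) {
  return $.ajax({
    url: '/api/survey/page',
    method: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ page: pageNumber, answers: answers })
  });
}

// e.g. when the user clicks "Next" on page 1:
// submitSurveyPage(1, { q1: 'free text answer...', q2: '...' });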

Why would I use check() instead of Match.test()? [closed]

From a production perspective, I do client-side checks (type, length, regex, etc.) on every data field sent to the server through my methods. Of course, I double-check everything on the server in the related methods.
Considering that every type or format error should already have been handled by the client code, I assume that, on the server, it is better to handle these errors quietly rather than throw an explicit error: the data must be coming from a client whose original code has been tampered with (if all my client checks are ok). In practice, I would then use Match.test() (quiet) instead of check() (which throws).
Is it good practice to handle such errors quietly on the server whenever they occur, given that this kind of error should have been flagged on the client first? If not, why?
Besides, I consider keeping track of these quiet errors and auto-block or flag accounts repeating them more than x times. Is that a good idea?
You are right in that a server should never, ever throw an explicit exception without handling it - that will cause your server to crash, and it'll be a nice denial-of-service for everybody.
However, the server should still be able to inform the user if the data is malformed, using a 400 class HTTP error message. Both security and user-friendliness are things that must absolutely be accounted for when dealing with user-inputted data - and with an untrusted client and slow server, both layers should have mechanisms for providing them.
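For illustration, a minimal Meteor sketch (my own, not from the answer; the method name and field pattern are hypothetical) that validates quietly with Match.test() but still reports a clear error to the client:

import { Meteor } from 'meteor/meteor';
import { check, Match } from 'meteor/check';

Meteor.methods({
  'profiles.update'(profile) {
    // Match.test() returns a boolean, so we decide how to respond.
    if (!Match.test(profile, { name: String, age: Number })) {
      // The method-call equivalent of a 400-class response.
      throw new Meteor.Error('invalid-argument', 'Malformed profile data');
    }
    // check() would validate and throw a Match.Error in one step:
    // check(profile, { name: String, age: Number });
    // ... persist the profile here ...
  }
});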

REST: Return complex nested data vs. multiple calls [closed]

I have a REST API served by NodeJS to an AngularJS front end.
I work with users:
GET /api/users #Returns all users
POST /api/users #Create new user
GET /api/users/:id #Return a user
PUT /api/users/:id #Edit a user
DELETE /api/users/:id #Delete a user
This is a user:
{
  login: "admin",
  email: "admin#admin.com",
  pass: "hashedpassword",
  ...
}
My user can belong to groups
GET /api/users/:id/groups #Return the groups of a user
They can also have constraints, or can inherit constraints from their groups:
GET /api/users/:id/constraints #Return the constraints of a user
GET /api/groups/:id/constraints #Return the constraints of a group
The problem:
I'm making an admin page displaying all the users, their groups, and their constraints.
Should I :
Make many requests in a for loop in the JavaScript (Angular) front end?
Something like :
$http.get('/api/users').then(function(result) {
  // for every user, fetch their groups, then each group's constraints
  result.data.forEach(function(user) {
    $http.get('/api/users/' + user.id + '/groups').then(function(groupsResult) {
      groupsResult.data.forEach(function(group) {
        $http.get('/api/groups/' + group.id + '/constraints');
      });
    });
  });
});
Create a path /api/users/withConstraintsAndGroups
That would return a big list of all the users with their groups and their constraints.
I find solution 1 to be very nice, maintainable, easy, and generic, but I'm afraid of very bad performance.
I find solution 2 to be ugly, hard to maintain, hard to code, and not generic, but with good performance.
Which solution should I choose?
Your question basically boils down to:
Which is better, one big HTTP request, or many small ones?
One thing to keep in mind is the expected network latency (ping time) between your clients and your server. In a high-latency situation with otherwise good bandwidth, many small requests will perform significantly worse than one large one.
Also, having one big request gives you better efficiency for compressing the response, and avoids the overhead of the extra HTTP requests and response headers.
Having said all that, I would still advise you to start with option 1, solely because you stated it is very easy for you to code. And then see if it meets your requirements. If not, then try option 2.
And as a final note, I generally prefer one big request to many small requests because that makes it easy to define each request as a transaction which is either completely applied or rolled back. That way my clients do not run into a state where some of their many small requests succeeded while others failed and they now have an inconsistent local copy of the data my API supplied.
Come on, how much would it really cost in performance? Based on the premise that the result will be almost the same, it comes down to multiple requests vs. one massive request. We're in 2016, and unless you're dealing with poor internet connections, you should do it by making multiple requests.
Are you developing your app for the 99% of the globe's population, or for the 1% that uses Opera (for the Turbo feature)?
Also, if we are talking about designing a REST API, consistency is probably the main idea of such an API. In your case, if you write the second method, you'll have to write something similar throughout the rest of your code to keep all your controllers consistent, which can't really be done.
Also, an API is an API and should not be modified just because a front-end application is being built on top of it. Think of the situation where you have 10 apps requesting your API, each in the situation you presented but needing a different response. Are you going to add new methods to the API for each? That's bad practice.
Routing in a REST API should be done according to the logical resources you have (objects that can be manipulated with HTTP verbs):
Examples:
* when you have a relation between entities
/users/3/accounts // should return a list of accounts from user 3
/users/3/accounts/2 // should return account 2 from user 3
* custom actions on your logical resources
/users/3/approve
/accounts/2/disable
Also, a good API should offer partial requests (for example, adding some query-string parameters to a usual request: users/3?fields=Name,FirstName), versioning, documentation (apiDocs.js is very useful in this case), a single request type (json), pagination, token auth and compression (I've never done the last one :))
I recommend modifying your endpoint for serving user resources to accept (through a query parameter) which associated resources should be included as part of the response:
/api/users?includes=groups&includes=constraints
This is a very flexible solution, and if your requirements expand in the future, you can easily support more associated resources.
If you're concerned about how to structure the response, I recommend taking a look at JSON:API.
An advantage of using JSON:API is that if multiple users share the same groups/constraints, you will not have to fetch multiple representations of the same entity.
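For illustration, a minimal Express sketch of the includes-parameter idea (my own, not from the answer; getUsers, getGroupsForUser and getConstraintsForUser are hypothetical data helpers):

const express = require('express');
const app = express();

// GET /api/users?includes=groups&includes=constraints
app.get('/api/users', async (req, res) => {
  const includes = [].concat(req.query.includes || []);  // string or array of strings
  const users = await getUsers();
  for (const user of users) {
    if (includes.includes('groups')) {
      user.groups = await getGroupsForUser(user.id);
    }
    if (includes.includes('constraints')) {
      user.constraints = await getConstraintsForUser(user.id);
    }
  }
  res.json(users);
});

app.listen(3000);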
You're asking
which is better...
and that's not acceptable according to SO rules, so I will assume your question lies around what REST is supposed to do in certain cases.
REST is based on resources, which, on their own, are like objects with their own data and "accessors/getters".
When you ask for /api/users/:id/groups, you are saying that you want to access, from the users resource/table, a specific user with an id and, from that user, the list of groups he/she belongs to (or owns, or whatever interpretation you want). In any case, the query is very specific to this groups resource (which, in REST terms, is unique, because every resource is universally addressable through its URL) and should not collide with, or be mistaken for, another resource. If more than one object could be targeted by the URL (for example, if you call just /api/groups), then your resource is an array.
Taking that into consideration, I would consider (and recommend) always returning the largest selection that matches the use case you specify. For example, if I were to create that API, I would probably do:
/users list of all users
/users/:id a specific user
/users/:id/groups groups to which the specific user has joined
/users/groups list of all groups that have at least one user
/users/groups/:id description of a particular group from /users/groups (detail)
etc...
Every use case should be completely defined by the URL. If not, then your REST specification is probably (and I'm almost sure) wrong.
==========
So, answering your question, considering that description, I would say: it depends on what you need to do.
If you want to show every single constraint on the main page (possible, but not a very elegant UI), then yes, the first approach is fine (hope your end users don't kill you).
If you want to ask for resources only on demand (for example, when popping up info on mouse hover, or after navigating to a specific section of the admin site), then your solution is something like the second one (yes, that's a big one... only if you really need that resource should you use it).
Have you considered the path /api/users/:id/constraints on demand?
BTW, withConstraintsAndGroups is not a REST resource. REST resources are nouns (constraints or groups, but not both), not verbs or adjectives.
It doesn't seem to me that performance is too much of an issue for the admin page. The only real difference between the two is that in #1 you have 3 API calls, and with #2 only one. The data should be the same, and of a reasonable size, not extremely huge. So if #1 is easier to code and maintain, you should go with that. You should not have any performance issues.

Loading lookup table from server - Efficient Format [closed]

If I had a Python script that created a lookup table which could be read by a webpage (JavaScript and maybe AJAX), what is the most efficient format to use (in speed and, if possible, size)?
The lookup table could have 2000 rows.
Here is a data example:
Apple: 3fd4
Orange: 1230
Banana: 942a
...
Even though this is primarily opinion-based, I'd like to roughly explain what your options are.
If size is truly critical, consider a binary format. You could even write your own!
With the data size you are presenting, we are probably talking megabytes of data (depending on the field values and number of columns), so the format matters. Now, a simple CSV or plain text file - provided it can be read by the webpage - is very efficient in terms of additional overhead: simply separating the values by a comma and putting the table headers on line 1 is very, very concise.
JSON would work too, but carries somewhat more overhead than a raw (text) data dump (like a CSV would be). JavaScript Object Notation is often used for data transfers, but really, in the case of raw data it does not make much sense to coerce it into such a format.
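For illustration, here is a minimal sketch (my own, not from the answer) of loading such a CSV lookup table in the browser; /lookup.csv and its "name,code" header line are hypothetical:

// Builds a { name: code } object from a small CSV file.
async function loadLookupTable() {
  const response = await fetch('/lookup.csv');
  const text = await response.text();
  const table = {};
  const lines = text.trim().split('\n');
  for (const line of lines.slice(1)) {   // skip the header row
    const [name, code] = line.split(',');
    table[name] = code;                  // e.g. table['Apple'] === '3fd4'
  }
  return table;
}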
Final thoughts: put it into a relational database and do not worry about it any more. That is the tried and tested approach to any relational data set, and I do not really see a reason you should deviate from that format.
