Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
If I have a Python script that creates a lookup table to be read by a webpage (JavaScript, and maybe AJAX), what is the most efficient format to use (in speed and, if possible, size)?
The lookup table could have 2000 rows.
Here is a data example:
Apple: 3fd4
Orange: 1230
Banana: 942a
...
Even though this is primarily opinion-based, I'd like to roughly outline your options.
If size is truly critical, consider a binary format. You could even write your own!
With the data you are presenting, the size depends on the field values and number of columns, but at 2000 rows it could run from tens of kilobytes into megabytes, so the format matters. A simple CSV or plain text file, provided it can be read by the webpage, is very efficient in terms of additional overhead: simply separating the values by a comma and putting the table headers on line 1 is very, very concise.
JSON would work too, but it carries somewhat more overhead than a raw text dump (like a CSV would be). JavaScript Object Notation is often used for data transfers, but for raw tabular data there is little reason to coerce it into that shape.
Final thought: put it into a relational database and do not worry about it any more. That is the tried and tested approach to any relational data set, and I do not see a real reason to deviate from it.
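To make the overhead comparison concrete, here is a rough Node.js sketch using the sample data from the question (real tables will of course differ in field lengths and column count):

```javascript
// Rough size comparison of CSV vs. JSON for the sample lookup table.
// The data below is the example from the question.
const table = { Apple: '3fd4', Orange: '1230', Banana: '942a' };

// CSV: one header line, then "name,code" per row.
const csv = ['name,code', ...Object.entries(table).map(([k, v]) => `${k},${v}`)].join('\n');

// JSON: the same data serialized as an object.
const json = JSON.stringify(table);

console.log(csv.length, json.length); // CSV is slightly smaller here
```

At three rows the difference is small; over thousands of rows the repeated quotes, colons, and braces of JSON add up, which is the overhead the answer refers to.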
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed last month.
I am working on a fairly large project and I want to make some changes to the code base to support internationalisation in the future. I have stored most strings in a separate file, but is there a good way to store dynamic strings in React?
Interpolated strings are hard to store as constants. I tried functions and string interpolation, but I feel this is not the correct way.
Some things that I tried:
const STRING = (context: string) => `This is a ${context}`
const STRING2 = `This is a ${context}`
But in the second approach, context needs to be defined somewhere, and perhaps passed in through the translations file itself.
For context, I am using TypeScript and I am planning to use React i18n for internationalisation.
I tried splitting the dynamic strings into separate constants, which works, but it is very inefficient, especially when translating.
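For what it's worth, the first approach (message functions) is workable on its own; a minimal sketch, with made-up message keys, and without any library involved (note that i18next, which the asker plans to use, instead keeps plain strings with `{{name}}` placeholders in the translation files):

```javascript
// A minimal sketch of storing dynamic strings as message functions.
// The messages object and key names are hypothetical, not from any library.
const messages = {
  greeting: (name) => `Hello, ${name}!`,
  itemCount: (n) => `You have ${n} item${n === 1 ? '' : 's'}.`,
};

// A tiny lookup helper: pick a message by key and apply its arguments.
function t(key, ...args) {
  const msg = messages[key];
  return typeof msg === 'function' ? msg(...args) : msg;
}

console.log(t('greeting', 'Ada')); // Hello, Ada!
console.log(t('itemCount', 2));    // You have 2 items.
```

The function form keeps the placeholder ("context") out of module scope, which is exactly the problem the second snippet in the question runs into.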
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I was learning fetch() and found out that the body of the response uses something called a ReadableStream. According to my research, a readable stream lets us start using the data while it is still being downloaded from the server (I hope I am correct). But how is that useful with fetch()? Don't we need to download all of the data before we can start using it? Overall, I just cannot understand the point of a readable stream in fetch(), so I need your kind help.
Here's one scenario: a primitive ASCII art "video player".
For simplicity, imagine a frame of "video" in this demo is 80 x 50 = 4000 characters. Your video "decoder" reads 4000 characters, displays them in an 80 x 50 grid, reads another 4000 characters, and so on until the data is finished.
One way to do this is to send a GET request using fetch, get the whole body as a really long string, then start displaying. So for a 100 frame "video" it would receive 400,000 characters before showing the first frame to the user.
But why does the user have to wait for the last frame to be sent before they can view the first frame? Instead, still using fetch, read 4000 characters at a time from the ReadableStream response body. You can read these characters before the remaining data has even reached the client.
Potentially, you can be processing data at the start of the stream in the client, before the server has even begun to process the data at the end of the stream.
Potentially, a stream might not even have a defined end (consider a streaming radio station for example).
There are lots of situations where it's better to work with streaming data than it is to slurp up the whole of a response. A simple example is summing a long list of numbers coming from some data source. You don't need all the numbers in memory at once to achieve this - you just need to read one at a time, add it to the total, then discard it.
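The summing example can be sketched with the web ReadableStream API (available in modern browsers and Node 18+); here the stream is built from an in-memory array to stand in for a network response:

```javascript
// Sum numbers from a ReadableStream one chunk at a time, without
// ever holding the whole sequence in memory. The stream here is built
// from an array to stand in for fetch()'s response.body; a real
// response body would be read the same way.
function makeNumberStream(numbers) {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i < numbers.length) controller.enqueue(numbers[i++]);
      else controller.close();
    },
  });
}

async function sumStream(stream) {
  const reader = stream.getReader();
  let total = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value; // use the chunk, then let it be garbage-collected
  }
  return total;
}

sumStream(makeNumberStream([1, 2, 3, 4, 5])).then((total) => {
  console.log(total); // 15
});
```

The only difference with a real `fetch()` is that `response.body` yields `Uint8Array` chunks you would decode first; the read loop is identical.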
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Randomly experimenting with JavaScript (ES6) and reading its documentation I found out that Set maintains the insertion order of its elements.
I wonder what is the rationale behind this decision? I always thought of a set as an unordered collection. Requiring anything more would lead to a more costly implementation, whereas, in my opinion, this feature is mostly unused.
Ordered Sets are very useful, consider for example:
unique_elements_in_order = [...new Set(some_array)]
In a language with unordered sets, like Python, you'd need a separate OrderedSet implementation for this to work.
Yes, theoretically, sets are not ordered, but mathematical abstractions like sets, functions, numbers, etc. are only tangentially related to the similarly named objects we use in programming. A Set is just a particular kind of data structure, and it's up to the language designers to define its specific properties, like "Sets iterate in insertion order" or "Sets can only contain hashable objects".
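The guarantee is easy to observe; a quick sketch:

```javascript
// A JavaScript Set preserves insertion order, and a duplicate keeps
// the position of its first insertion rather than moving to the end.
const seen = new Set();
for (const x of [3, 1, 3, 2, 1]) seen.add(x);

console.log([...seen]); // [ 3, 1, 2 ]
```

This is exactly what makes the `[...new Set(some_array)]` deduplication idiom above deterministic.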
As to the committee's motivation, some googling turned up this:
Mark S. Miller:
Mon Feb 13 22:31:28 PST 2012
There are many benefits to determinism. E started with non-deterministic iteration order, which opens a covert channel hazard. I initially changed to deterministic order merely to plug this leak. Having done so, I found it had many software engineering benefits. For example, it becomes much easier to write regression tests and to reproduce bugs by re-execution. In my implementation, it also had a minor additional space and time cost. Tyler's Waterken tables show that even the minor runtime costs I was paying were unnecessary.
Let's not introduce yet another source of non-determinism for the sake of an unmeasured efficiency. Let's measure, and if the cost turns out to be high after all, then let's reconsider determinism.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
JavaScript is said to be a "loosely-typed" language. This is due to the fact that the runtime allows operations to be performed on operands of different types (via coercion):
var number = 6;
var bool = true;
var result = number + bool; //result is 7
Coming from a mostly statically-typed, strongly-typed background, I am having a hard time reasoning about the benefits of this type of approach. Sure, it can make for some pretty concise syntax, but it also seems like it could cause a nightmare when trying to track down bugs. So, besides conciseness, what are some of the benefits of loose typing and implicit type conversions?
Loosely typed languages have a number of differences which can be taken as advantages:
There is no need for interfaces. As long as an object has the method name that you need, call that method. Not using interfaces can simplify coding and reduce code size.
There is no need for generics, for very similar reasons.
"By type" function overloads are handled more simply. If a function needs a string parameter, just cast the incoming value to a string. If type checking is needed, it can be added there.
We don't have, or need, classes. The fact that [almost] everything is an object makes passing values around much easier. No need to auto-box, no need to cast values coming out.
Objects are easily extended without breaking code. You can create an array and then replace its indexOf method with one that uses binary search. The end result is smaller and, IMHO, cleaner code.
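The "no interfaces" point above is duck typing in action; a small sketch (the object shapes are made up for illustration):

```javascript
// Duck typing: any object with a speak() method works here.
// No interface declaration is needed, or even possible.
function describe(animal) {
  return animal.speak();
}

const dog = { speak: () => 'Woof' };
const robot = { speak: () => 'Beep' }; // not an "animal" at all, still fine

console.log(describe(dog), describe(robot)); // Woof Beep
```

In a statically-typed language both arguments would need to implement a shared interface; here the runtime only cares that `speak` exists at call time, which is both the convenience and the bug-tracking hazard the question describes.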
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am trying to develop product filters for an online store I am working on. An example of what I mean is http://www.riverisland.com/men/just-arrived. I have managed to get JavaScript to add values to the URL when sizes are clicked, but failed to get it to remove values from the URL when they are unchecked.
My main question here is this. Assuming I have my URL as:
http://127.0.0.1/shop/dresses/?s=1&s=2&s=3
How do I get my PHP to extract the values from the URL?
How do I format an SQL query to search for the values taken from the URL (any sample query will do)?
An easier solution is this.
Format your URL like http://127.0.0.1/shop/dresses/?s=1,2,3 as suggested by @Andrey.Popov, then do the following:
if (isset($_GET['s']) && !empty($_GET['s'])) {
    $e = sanitizeFunction($_GET['s']);
    $d = explode(',', $e);
}
$d now has all your $_GET['s'] values.
That's the easier way I have figured out and it works!
In order to benefit from $_GET and other superglobals, you have to follow the rules explained at Variables From External Sources. Since you've chosen to have several parameters with the same name, and they do not contain properly paired square brackets, you're basically on your own. The manual steps you must reproduce include:
Extract the raw query string from $_SERVER['QUERY_STRING'], e.g.:
$query_string = filter_input(INPUT_SERVER, 'QUERY_STRING');
Parse the string yourself. As far as I know, there are no built-in functions that do exactly this, so I'd either look for a good third-party library or write a simple parser with regular expressions or good old explode().
Decode the URL-encoded values with urldecode()
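On the JavaScript side, where the question's filter UI builds the URL in the first place, both URL shapes are easy to read back with URLSearchParams, which may help when deciding which format to emit:

```javascript
// Repeated parameters like ?s=1&s=2&s=3 parse cleanly with URLSearchParams.
const repeated = new URLSearchParams('s=1&s=2&s=3');
console.log(repeated.getAll('s')); // [ '1', '2', '3' ]

// The comma-separated variant ?s=1,2,3 needs a split instead,
// mirroring the explode(',', ...) call on the PHP side.
const single = new URLSearchParams('s=1,2,3');
console.log(single.get('s').split(',')); // [ '1', '2', '3' ]
```

Note that PHP's own superglobals would only keep the last `s` for the repeated form (unless the name is `s[]`), which is why the comma-separated variant pairs more naturally with the PHP answers above.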