I have a CSV file of about 10k rows x 25 columns. The CSV contains information about bus routes, stops, etc. I have a select box with all the routes; the user will be able to pick a single route to show on the map, and then click on individual stops (another select box) to get a closer look on the map. I want to know the best way to parse and structure this information so I can perform fast queries (a database?), and how I should store the result of a query (array, JSON object, dictionary, data table?). I won't need all columns every time, so I will pick only the useful columns to make the query a little faster.
Each time a user selects a different route, I will make a query to get all the stops and other relevant information and loop through the data to display it on the map (maybe store the results of the last 5 queries?). What is the best way to store this result? Showing the specific stop information won't be too big of a deal, since it will be a smaller subset of the already-queried results.
Let me know if you need any additional information that will assist with the answers.
Google released a public schema called GTFS (General Transit Feed Specification), which is a standard structure for transit data. You would ideally use a graph data structure; I have found Neo4j a good option.
See this github project for an example of how to use a graph database for this purpose:
https://github.com/tguless/neo4j-gtfs
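If a full graph database feels heavy for ~10k rows, the question's own idea (query per selected route, cache the last 5 results) can also be sketched in plain JavaScript with an in-memory index. This is just a sketch: the column names `route_id`, `stop_id`, and `stop_name` are assumptions modeled on GTFS, not taken from the actual CSV.

```javascript
// Build a Map from route_id -> array of stop rows, so each route
// selection is an O(1) lookup instead of a scan over all 10k rows.
function buildRouteIndex(rows) {
  const index = new Map();
  for (const row of rows) {
    if (!index.has(row.route_id)) index.set(row.route_id, []);
    // Keep only the columns the map display actually needs.
    index.get(row.route_id).push({ stop_id: row.stop_id, stop_name: row.stop_name });
  }
  return index;
}

// Tiny LRU cache for the last 5 selected routes, as the question suggests.
class RouteCache {
  constructor(limit = 5) { this.limit = limit; this.map = new Map(); }
  get(routeId) {
    if (!this.map.has(routeId)) return undefined;
    const stops = this.map.get(routeId);
    this.map.delete(routeId);       // re-insert to mark as most recently used
    this.map.set(routeId, stops);
    return stops;
  }
  set(routeId, stops) {
    this.map.delete(routeId);
    this.map.set(routeId, stops);
    if (this.map.size > this.limit) {
      this.map.delete(this.map.keys().next().value); // evict least recently used
    }
  }
}
```

For this data size, the whole index fits comfortably in memory, so the cache only matters if building per-route results involves real work (e.g. a server round trip).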
So, I have googled and googled and googled. Maybe I need to improve my Google skills.
Maybe the answer already exists somewhere on this page, but I have not found it. Anyway.
I am dealing with a CSV file which shows the date, transaction category, location, and amount of each transaction, in that order.
What I need to do is present this information dynamically in an HTML table. The user should be able to select the location from a dropdown menu, and the table should be updated with the corresponding information about transactions done from that location.
I am able to present the overall information from the CSV file, but I'm struggling to make it truly dynamic.
I have the information stored in a 2D array: [date, transaction category, location, amount].
So my real problem is: how do I make this truly dynamic? Meaning, how can I update the table without hardcoding anything? I want it to be independent of which CSV file I am uploading.
I'm thinking it would be good to have a function that loops through the array, filters out the user-selected rows on an onchange event, and then sends the selected rows to the output. But I can't quite imagine exactly how this would be done...
Thankful for any suggestions. :)
Honestly, I would turn it into JSON first and go from there. One of my favorite parsers is csvtojson; it's pretty simple to use.
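As a rough sketch of the onchange idea from the question, assuming the rows have already been parsed into objects (the field names `date`, `category`, `location`, and `amount` are placeholders for whatever the CSV headers turn out to be): a pure function filters the rows by location, and another builds the table body as an HTML string from whatever keys the rows have, so nothing is hardcoded to one particular file.

```javascript
// Filter parsed rows by the location the user picked in the dropdown.
function rowsForLocation(rows, location) {
  return rows.filter(row => row.location === location);
}

// Build <tr> markup from whatever keys the rows happen to have,
// keeping the code independent of the particular CSV's columns.
function renderRows(rows) {
  return rows
    .map(row => '<tr>' + Object.values(row).map(v => `<td>${v}</td>`).join('') + '</tr>')
    .join('');
}

// In the page, wire it to the dropdown's onchange event, e.g.:
// select.onchange = () => {
//   tbody.innerHTML = renderRows(rowsForLocation(data, select.value));
// };
```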
I have a table with a thousand records in it, and I want to do a Google-like full-text/fuzzy search.
I read about MySQL v8's full-text search; let's say we don't have that functionality yet.
There is a JavaScript library called Fuse.js that does fuzzy search, which is what I need.
I can combine the two by creating an API that returns the table data in JSON format and then passing it to Fuse.js to do a fuzzy search.
Now, I think it's not recommended to load all the data from the table every time someone wants to search.
I read about Redis, and the first thing that came to mind is to save all the table data in Redis using JSON.stringify and just fetch that every time instead of querying the database. Then, whenever data is added to the table, I would also update the contents of the data in Redis.
Is there a better way to do this?
That is a very common caching pattern.
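A minimal sketch of that cache-aside pattern, using a plain `Map` as a stand-in for Redis (with a real client, the `get`/`set` calls become `GET`/`SET` against a key, and the invalidation on write stays the same; `queryAllRowsFromDb` is a placeholder for the real database query):

```javascript
// Stand-in for Redis: any string key/value store with get/set/delete works here.
const cache = new Map();
const CACHE_KEY = 'table:all';

// Placeholder for the real database query that returns all rows.
async function queryAllRowsFromDb() {
  return [{ id: 1, name: 'alice' }, { id: 2, name: 'bob' }];
}

// Cache-aside read: serve from the cache, fall back to the database once.
async function getAllRows() {
  const cached = cache.get(CACHE_KEY);
  if (cached !== undefined) return JSON.parse(cached);
  const rows = await queryAllRowsFromDb();
  cache.set(CACHE_KEY, JSON.stringify(rows)); // same idea as SET key <json> in Redis
  return rows;
}

// On writes, drop (or refresh) the cached copy so readers
// don't keep seeing stale data.
function invalidateCache() {
  cache.delete(CACHE_KEY);
}
```

The rows returned by `getAllRows()` are what you would hand to Fuse.js for the fuzzy search.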
If you need a more efficient way to store and retrieve your JSON to/from Redis, you might want to consider one of the available Redis modules, e.g.:
RedisJSON allows you to efficiently store, retrieve, project (JSONPath), and update JSON in place.
RediSearch gives you full-text search over Redis hashes and efficient retrieval of data matching the user's query.
Lastly, RedisJSON2 (aka RedisDoc) combines both modules above, meaning efficient JSON storage and retrieval with full-text support.
I want to write data to a specific location in the database. Let's say I have a couple of users in the database, each with their own personal information, including their e-mail address. I want to find a user by e-mail; that is, I don't know in advance whose e-mail it is, but whoever it belongs to, I want to do something with that user's information. To make it clearer, here is my database sample.
Now, in one of my JavaScript files, when the user, let's say name1, changes his name, I update my object in JavaScript and want to replace the whole object under ID "-LEp2F2fSDUt94SRU0cx". In short, I want to write this updated object to the path ("Users/-LEp2F2fSDUt94SRU0cx") without doing it by hand, knowing only the e-mail. So the logic is: "Go find the user with the e-mail "name1#yahoo.com" and replace the whole object with his new updated object." I tried to use orderByChild("Email").equalTo("name1#yahoo.com").set(updated_object), but that syntax does not work, I guess. Hopefully I have explained myself.
The first part is the query, which is separate from the write that updates the data. This is the query that finds the user:
ref.child('users').orderByChild("Email").equalTo("name1#yahoo.com")
To update, once you have the user id from the query result, you need to do something like this (note that update() takes an object of child keys to change; use set() on the user ref to replace the whole object):
ref.child('users').child(userId).update({ Email: newValue });
firebase.database.Query
A Query sorts and filters the data at a Database location so only a subset of the child data is included. This can be used to order a collection of data by some attribute (for example, height of dinosaurs) as well as to restrict a large list of items (for example, chat messages) down to a number suitable for synchronizing to the client. Queries are created by chaining together one or more of the filter methods defined here.
// Find all dinosaurs whose height is exactly 25 meters.
var ref = firebase.database().ref("dinosaurs");
ref.orderByChild("height").equalTo(25).on("child_added", function(snapshot) {
console.log(snapshot.key);
});
I am working on a project in Meteor which uses Elasticsearch as a search engine. I need the search feature on the site to allow 'stacking' searches. So, for instance, one can search for a file that a user in a certain 'group' uploaded by 'stacking' the user's name, followed by the group name, and ending with the file name or some content in the file.
Now, in the MongoDB database the groups, users, and files are stored in separate collections and related to each other through IDs. Elasticsearch, however, uses a distributed datastore where everything is 'flat'. This makes it necessary to denormalize data, do application-side joins, etc. (https://www.elastic.co/guide/en/elasticsearch/guide/current/relations.html).
My question is: which method would be the best...
Denormalize data, use nesting, etc.
--> So, when rivering data to the Elasticsearch datastore, I would make copies of the data and replace every parent element with a new one which has the data added to it.
FOR EXAMPLE: if someone comments on, say, a post in a group, the server would have to add the comment to the general list of comments, plus find the post object, append the comment to it, and re-add the post object to the database, plus update the group object which contains the post object (which should contain the comment), plus do the same for the user object (since I want to be able to stack searches on groups, users, etc.).
Basically, whenever something is added or deleted, I'd have to update every object in the database that relates to it.
Run multiple Elasticsearch queries (https://www.elastic.co/guide/en/elasticsearch/guide/current/application-joins.html) to retrieve the data I want.
--> So, just perform search queries on each separate collection, and use JavaScript on the server side to compare the result arrays and produce the search results.
**Note: this is for scaling up to a relatively mid-level load/usage, so around hundreds to thousands of pieces of data to search through. Although if this can work at a larger scale (millions), that would be great!
Please correct me if my understanding of anything is wrong, and thank you for reading through all this!
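The application-side join from the second option can be sketched with plain arrays standing in for the Elasticsearch indices (a real implementation would issue two searches through the Elasticsearch client instead, e.g. a match query followed by a terms filter; all the field names here are made up):

```javascript
// Stand-ins for two 'flat' Elasticsearch indices.
const users = [
  { id: 'u1', name: 'alice', groupId: 'g1' },
  { id: 'u2', name: 'bob', groupId: 'g2' },
];
const files = [
  { id: 'f1', name: 'report.pdf', uploaderId: 'u1' },
  { id: 'f2', name: 'notes.txt', uploaderId: 'u2' },
];

// Step 1: first query - find matching users
// (in Elasticsearch, a match query on the user index).
function findUsersByName(name) {
  return users.filter(u => u.name.includes(name));
}

// Step 2: second query - find files whose uploaderId is in the id set
// from step 1 (in Elasticsearch, a terms filter on uploaderId).
function findFilesForUsers(matchedUsers, fileName) {
  const ids = new Set(matchedUsers.map(u => u.id));
  return files.filter(f => ids.has(f.uploaderId) && f.name.includes(fileName));
}
```

The trade-off versus denormalizing is that every 'stacked' level costs one extra query round trip, but writes stay cheap because nothing is duplicated.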
I am building a page which, depending on a date range, displays between 0 and a couple hundred rows. When the user enters the page, it loads all the rows and displays them; the user can then filter the data to his needs. This seems reasonably fast in Chrome, but IE8 becomes quite slow at some point. (Unfortunately, IE8 is the browser that counts.)
Say I need the entire data set at page load but only want to display a subset. What's the best way to do that?
1.) Build a DOM string and add only the needed rows to the "real" DOM.
2.) Save the data in localStorage.
3.) Take the needed data from the server-produced JSON object.
4.) ???
Or is it always better to hit the server with a specific query and return only the needed data?
On page load, render all the rows in the DOM and keep the necessary fields of the data in a JSON array.
When the filter criteria change, filter the data in the JSON array, and then, using the unique identifiers in the JSON, hide the non-matching rows in the table (only hide, don't remove). This way you won't have to re-render the existing rows.
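A minimal sketch of that hide-by-identifier idea, with the filtering kept as a pure function (the `id` and `date` fields are assumptions about the row data; in the page, each `<tr>` would carry its row's id, e.g. in an `id` or `data-` attribute):

```javascript
// Given all rows and a date-range filter, return the set of row ids
// that should stay visible.
function visibleIds(rows, from, to) {
  return new Set(
    rows.filter(row => row.date >= from && row.date <= to).map(row => row.id)
  );
}

// In the page: toggle visibility instead of re-rendering, e.g. for each
// <tr> with a known rowId:
//   tr.style.display = keep.has(rowId) ? '' : 'none';
```

Comparing ISO-format date strings lexicographically, as above, works because they sort chronologically.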
If you choose the AJAX way, though, the fastest approach is to render the HTML on the server side and then simply replace the content of the table with it. That way the browser renders the representation from the given string, and you don't have to iterate through a JSON array and render it row by row. Its drawbacks may be network latency and bandwidth.
Hope this helps you decide.