Making pagination work with its count - javascript

I have a question about how to do pagination on the server side with MySQL as the database. I have figured most of it out and it works fine, but I was wondering whether the count query can be eliminated. Say the user searches for the word "jack" and there are 25 matching rows, paginated 10 per page. The paginator needs to know the total number of rows in order to render the correct number of pages. So the way I do it now is: first I run a query like this to get the count of all rows matching the criteria:
"SELECT COUNT(id) AS count FROM " + table + query
Then I run a second query against the database that uses LIMIT and OFFSET to fetch exactly that page:
"SELECT * FROM " + table + query + " LIMIT ? OFFSET ?"
Then I return two objects to the client: first the count of all rows, and second the rows the user needs to see now. My question is: is this the most efficient way to do it, or is there a better way?
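To make it concrete, here is roughly what I do now (a sketch, assuming the mysql2/promise driver; the connection settings, table and WHERE clause variables are placeholders):
// Two-query approach: one COUNT query, one LIMIT/OFFSET query.
const mysql = require('mysql2/promise');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'mydb' }); // assumed credentials

async function search(table, whereClause, params, page, perPage) {
  // 1) count all rows matching the criteria, so the paginator knows how many pages exist
  const [countRows] = await pool.query(
    "SELECT COUNT(id) AS count FROM " + table + whereClause, params);
  const total = countRows[0].count;

  // 2) fetch only the rows for the requested page
  const [rows] = await pool.query(
    "SELECT * FROM " + table + whereClause + " LIMIT ? OFFSET ?",
    [...params, perPage, (page - 1) * perPage]);

  return { total, rows };   // both objects go back to the client
}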

You can achieve this with one query, but it adds some weight to the output: if you fetch, say, 1000 records, then total_records is repeated in every one of those 1000 rows of the result set. On the other hand, it saves one query:
SELECT
column1,
column2,
column3,
(SELECT COUNT(id) FROM `table`) AS total_records
FROM
`table`
LIMIT 0, 10
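On the application side, handling that combined query would look roughly like this (a sketch, reusing the hypothetical mysql2 pool from above; total_records is read once from the first row and stripped before the page is sent on):
// Every row carries total_records, so read it from the first row and drop the column.
const [rows] = await pool.query(
  "SELECT column1, column2, column3, " +
  "  (SELECT COUNT(id) FROM `table`) AS total_records " +
  "FROM `table` LIMIT 0, 10");

const total = rows.length ? rows[0].total_records : 0;   // 0 only when the page itself is empty
const page  = rows.map(({ total_records, ...rest }) => rest);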

I don't see anything wrong with your approach (although you could send both statements to the database in one round trip). With traditional database pagination you must know the total number of records, so the only question is how to get the count.
The improvements below are mostly about doing it a different way.
Improvement 1: infinite scroll, which gets rid of pagination altogether. It may not be what you want, but more and more websites are adopting it. Does the user really need to know how many pages a free-text search returns?
Improvement 2: use Elasticsearch instead of the database. It is built for free-text search and will generally perform much better. You can also get the count (hits) and the page of results in one search request.
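As a rough illustration of improvement 2, a single Elasticsearch search request returns both the page of hits and the total match count (a sketch; the index name, field name and URL are assumptions):
// One request: paged hits plus the total count.
const res = await fetch('http://localhost:9200/items/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: { match: { name: 'jack' } },
    from: 0,    // offset, like SQL OFFSET
    size: 10    // page size, like SQL LIMIT
  })
});
const body  = await res.json();
const total = body.hits.total.value;            // total matches (ES 7+; older versions: body.hits.total)
const rows  = body.hits.hits.map(h => h._source);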

Related

Socrata, SODA, REST, JS: How to query number of rows?

I'm trying to figure out how to query a resource to find out how many rows it has, so I can decide whether to request the entire resource at once or use paging to bring back rows in batches.
For example, this resource here:
https://data.cityofnewyork.us/Transportation/Bicycle-Routes/7vsa-caz7
1) In cases where I know the number of rows, I can use the $limit parameter to ensure I get back everything. For example, this dataset has about 17,000 rows, so giving it a $limit of 20000 gets all of them.
For example:
https://data.cityofnewyork.us/resource/cc5c-sm6z.geojson?$limit=20000
also...
2) I thought about making a metadata call, but while this request returns metadata, the number of rows is not part of it:
https://data.cityofnewyork.us/api/views/metadata/v1/cc5c-sm6z
However, I would like to know how many rows are in the dataset before I decide how to request them: all at once with the $limit parameter, or paging with the $limit and $offset parameters.
Ideas?
One method is to count the rows using the COUNT function in the API call itself.
Note that YMMV with this approach. Generally the cutoff is around 50,000 rows before you need to switch to paging, so I always apply a 50k limit and keep paging ready in case the dataset is larger.
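A sketch of that idea, assuming SoQL's count(*) aggregation in $select (the exact field name in the response may differ, so check the response shape before relying on it):
// Ask the SODA endpoint only for the row count, then decide how to fetch.
const base = 'https://data.cityofnewyork.us/resource/cc5c-sm6z';
const [row] = await (await fetch(base + '.json?$select=count(*)')).json();  // e.g. [{ "count": "17224" }]
const total = Number(Object.values(row)[0]);

if (total <= 50000) {
  // small enough: grab everything in one call
  const all = await (await fetch(`${base}.geojson?$limit=${total}`)).json();
} else {
  // otherwise loop with $limit and $offset to page through the rows
}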

Firebase: best way to order and filter multiple childs

As we know, Firebase won't let you order by multiple children. I'm looking for a way to filter my data so that in the end I can limit it to a single result. If I want to get the lowest price, it would be something like this:
ref.orderByChild("price").limitToFirst(1).on...
The problem is that I also need to filter by dates (timestamp), so for that alone I would do:
.orderByChild("timestamp").startAt(startValue).endAt(endValue).on...
So for now that's my query, and then I loop over all the results looking for the single row with the lowest price. My data is pretty big, around 100,000 rows, but I can restructure it however I want.
The first query gets the lowest price across all timestamps, so the returned row might have the lowest price but fall outside my date range. However, that query takes ONLY 2 seconds, compared to the second one, which takes 20 seconds including my code that finds the lowest price.
So, what are your suggestions on how best to do this? I know I could build another index that combines the timestamp and the price, but those are different kinds of values, which makes that impossible.
full data structure:
country
  store
    item
      price,
      timestamp
Just to make it even clearer: I have two nested loops that run over all countries and then over all stores, so the real query is something like this:
ref.child(country[i]).child(store[j]).orderByChild("timestamp").startAt(startValue).endAt(endValue).on...
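Roughly, that nested loop looks like this (a sketch; the variable names are placeholders):
// Query each country/store by timestamp range, then scan on the client for the lowest price.
let lowest = null;
countries.forEach(country => {
  stores.forEach(store => {
    ref.child(country).child(store)
      .orderByChild("timestamp")
      .startAt(startValue).endAt(endValue)
      .once("value", snapshot => {
        snapshot.forEach(child => {
          const item = child.val();
          if (lowest === null || item.price < lowest.price) lowest = item;
        });
      });
  });
});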
Thanks!

Selecting rows and updating user's info at the same time

I know this can be done easily with JavaScript/PHP, but I'm wondering if it's possible with pure MySQL and one query.
So imagine this:
some rows are inserted with a number that the user has typed in (for example 5, 7, 1 and 2).
So the row would now look like this:
I have one more table which contains user's data:
Now: is it possible to increase the user's points by 10 if the number he entered is above 5?
Thank you for help,
Nedas
Join the tables to find the corresponding rows, and then update them.
UPDATE points_table AS p
JOIN numbers_table AS n ON p.user = n.user
SET p.points = p.points + 10
WHERE n.number > 5

Select top ten entries of multiple rows in MySQL

I've got a table which manages user scores, e.g.:
id      scoreA    scoreB    ...    scoreX
------  --------  --------  ...    --------
1       ...       ...       ...    ...
2       ...       ...       ...    ...
Now I want to create a scoreboard that can be sorted by each of the scores (descending only).
However, I can't just query the entries and send them to the client (which renders them with Javascript) as the table contains thousands of entries and sending all of those entries to the client would create unreasonable traffic.
I came to the conclusion that all non-relevant entries (entries which may not show up in the scoreboard as the score is too low) should be discarded on the server-side with the following rule of thumb:
If any of the scores is within the top ten for this specific score keep the entry.
If none of the scores is within the top ten for this specific score discard it.
Now I'm wondering whether this can be done efficiently with (My)SQL, or whether this processing should take place in the PHP code that queries the database, to keep the whole thing performant.
Any help is greatly appreciated!
Go with rows, not columns, for storing scores, and put a composite index on (userid, score). A datetime column could also be useful. Consider not having a top-10 snapshot table at all; just do the lookup you suggest, with ORDER BY score DESC and LIMIT 10 in the query.
The term to investigate here is "covering index". Good luck.
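A sketch of what that row-per-score layout and lookup could look like (the table, column and index names are assumptions, and the mysql2 pool is hypothetical):
// Assumed layout: one row per score, e.g. scores(userid, score_type, score),
// indexed so that ORDER BY score DESC LIMIT 10 is a cheap index scan (e.g. on (score_type, score)).
const [topTen] = await pool.query(
  "SELECT userid, score FROM scores " +
  "WHERE score_type = ? " +
  "ORDER BY score DESC " +
  "LIMIT 10",
  ['scoreA']);   // hypothetical score type; run one such query per board the user requests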
You can also add an INDEX tailored to these specific queries to improve performance; that speeds up exactly the kind of lookup you describe. Good luck.
I would first fire a query to obtain the top 10, then fire the query that gets the results, using that top 10 in your SQL.
I can't formulate the query until I know what you mean by top 10 - give an example.

Best approach for sortable table with a lot of data

On our web application, the search results are displayed in sortable tables. The user can click on any column and sort the results. The problem is that sometimes the user does a broad search and gets a lot of data back. To make the sortable part work you probably need all the results, which takes a long time. Or I can retrieve a few results at a time, but then sorting won't really work well. What's the best practice for displaying sortable tables that might contain lots of data?
Thanks for all the advice; I will certainly be going over it.
We are using an existing JavaScript framework that has the sortable table; "lots" of results means hundreds. The problem is that our users are at a remote site, and a lot of the delay is the network time to send/receive data from the data center. Sorting the data on the database side and only sending one page worth of results at a time is nice; but when the user clicks a column header, another round trip is made, which always adds 3-4 seconds.
Well, I guess that might be the network team's problem :)
Using sorting and paging at the database level is the correct answer. If your query returns 1000 rows but you're only going to show the user 10 of them, there is no need for the other 990 to be sent across the network.
Here is a MySQL example. Say you need 10 rows, 21-30, from the 'people' table:
SELECT * FROM people LIMIT 20, 10
(The first argument is the number of rows to skip, the second is the page size.)
You should be doing the paging back on the database server. E.g. SQL 2005 and SQL 2008 have paging techniques. I'd suggest looking at the paging options for whatever system you're using.
What database are you using? From SQL 2005 upwards there are some good paging options using ROW_NUMBER that let you do the paging on the server. I found a good example on Christian Darie's blog.
E.g. this procedure, which is used to page products in a category: you just pass in the page number you want, the number of products per page, and so on.
CREATE PROCEDURE GetProductsInCategory
(@CategoryID INT,
 @DescriptionLength INT,
 @PageNumber INT,
 @ProductsPerPage INT,
 @HowManyProducts INT OUTPUT)
AS
-- declare a new TABLE variable
DECLARE @Products TABLE
(RowNumber INT,
 ProductID INT,
 Name VARCHAR(50),
 Description VARCHAR(5000),
 Price MONEY,
 Image1FileName VARCHAR(50),
 Image2FileName VARCHAR(50),
 OnDepartmentPromotion BIT,
 OnCatalogPromotion BIT)
-- populate the table variable with the complete list of products
INSERT INTO @Products
SELECT ROW_NUMBER() OVER (ORDER BY Product.ProductID),
       Product.ProductID, Name,
       SUBSTRING(Description, 1, @DescriptionLength) + '...' AS Description,
       Price, Image1FileName, Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM Product INNER JOIN ProductCategory
  ON Product.ProductID = ProductCategory.ProductID
WHERE ProductCategory.CategoryID = @CategoryID
-- return the total number of products using an OUTPUT variable
SELECT @HowManyProducts = COUNT(ProductID) FROM @Products
-- extract the requested page of products
SELECT ProductID, Name, Description, Price, Image1FileName,
       Image2FileName, OnDepartmentPromotion, OnCatalogPromotion
FROM @Products
WHERE RowNumber > (@PageNumber - 1) * @ProductsPerPage
  AND RowNumber <= @PageNumber * @ProductsPerPage
You could do the sorting on the server. AJAX would eliminate the need for a full refresh, but there'd still be a delay. Besides, databases are generally very fast at sorting.
For these situations I employ techniques on the SQL Server side that not only leverage the database for the sorting, but also use custom paging to ONLY return the specific records needed.
It is a bit of a pain to implement at first, but the performance is amazing afterwards!
How large is "a lot" of data? Hundreds of rows? Thousands?
Sorting can be done painlessly via JavaScript with MochiKit sortable tables. However, if the data takes a long time to sort (most likely a second or two [or three!]), then you may want to give the user some visual cue that something is happening and the page didn't just freeze. For example, tint the screen (a la Lightbox) and display a "sorting" animation or text.
