Sails.js - How do I do pagination in Sails.js? - javascript

I want to create a paginated table using Sails.js, MongoDB and the Waterline ORM.
Is there any specific way to do pagination in Sails.js?

http://sailsjs.org/#/documentation/concepts/ORM/Querylanguage.html
Model.find().paginate({page: 2, limit: 10});
Model.find({ where: { name: 'foo' }, limit: 10, skip: 10 });
If you want the pagination to work asynchronously, it's very easy to do with jQuery's $.getJSON on the client and res.json() on the server.
There's a lot of info in the Waterline and Sails docs.
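For example, a minimal sketch of that round trip (the /user/find route, User model, and #rows element are assumed names, not from the question):

// Client: fetch page 2 at 10 rows per page and render it.
$.getJSON('/user/find', { skip: 10, limit: 10 }, function (users) {
  var rows = users.map(function (u) {
    return '<tr><td>' + u.name + '</td></tr>';
  });
  $('#rows').html(rows.join(''));
});

// Server (a Sails controller action): return one page as JSON.
find: function (req, res) {
  User.find({
    skip: parseInt(req.param('skip'), 10) || 0,
    limit: parseInt(req.param('limit'), 10) || 10
  }).exec(function (err, users) {
    if (err) { return res.serverError(err); }
    return res.json(users);
  });
}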

There is also another way.
If you want to fetch data from the front-end and have blueprints turned on, you can also try:
http://yourDomain.com/ModelName?skip=10&limit=10
Reference:
1. official site: http://sailsjs.org/#/documentation/reference/blueprint-api/Find.html

You could build a functional paginator with built-in skip & limit query parameters for blueprint routes:
/api/todos?skip=10&limit=10
With this option, you could have a dynamically sized page according to various device sizes: the limit parameter is basically your page size. Multiply the page size by (current page number - 1) and voila, you've got your skip parameter.
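That calculation fits in a couple of lines (pageSize and pageNumber are illustrative names):

// pageNumber is 1-based; e.g. pageSize 10, page 3 => skip 20
var skip = pageSize * (pageNumber - 1);
var url = '/api/todos?skip=' + skip + '&limit=' + pageSize;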
As for how to get the total number of items, I haven't found a built-in way to do it, so I've written a little helper middleware (https://github.com/xtrinch/sails-pagination-middleware) that returns the total count in the response JSON like this:
{
  "results": [
    {
      /* result here */
    },
    {
      /* another result here */
    }
  ],
  "totalCount": 80
}
All you need to do is install the middleware via npm and add it to your middleware configuration in config/http.js.
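Hedging on the exact export, wiring a custom middleware into Sails 1.x looks roughly like this in config/http.js (the 'pagination' key is an assumed name; check the package README for the real one):

// config/http.js -- sketch only; the middleware key name is assumed
module.exports.http = {
  middleware: {
    order: [
      'cookieParser',
      'session',
      'pagination',   // run the count middleware before the router
      'router',
      'www',
      'favicon'
    ],
    pagination: require('sails-pagination-middleware')
  }
};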
If you need a fully functional example, I've also got an example to-do app with this sort of pagination on GitHub: https://github.com/xtrinch/vue-sails-todo. It's written with Vue, but you should get the idea either way.
Note that this answer requires Sails 1.x.

I think you can also do it with io:
io.socket.get('/thing', {limit: 30, skip: 30*pageNum}, function(things, jwr) { /*...*/ })

Related

Is it possible to reset specific item in Apollo Client Cache?

I have a query which uses fetchMore and relayPagination, and it works fine for lazy loading with the page and perPage variables. The issue is that when I try to refresh the query after updating the other variables used for filtering (like date and type, whose values can be either debited or credited), it fetches the data but then appends the incoming data to the Apollo cache instead of replacing the old data with the new one.
For example, in this sample typePolicy:
PaginatedBooks: {
  fields: {
    allBooks: {
      merge: relayPagination()
    }
  }
},
AllProducts: {
  // Singleton types that have no identifying field can use an empty
  // array for their keyFields.
  keyFields: [],
},
I only want to reset PaginatedBooks.
I've tried using a fetchPolicy of no-cache, but this stops pagination and fetchMore from working, since I can't merge existing and incoming data. I opted to use client.resetStore() (https://www.apollographql.com/docs/react/api/core/ApolloClient/#ApolloClient.resetStore), but this also refetches other active queries and causes the UI to flicker. So far, looking through the documentation and the GitHub repo, I can't seem to find anything, or anyone who has tried to do something similar, so I'm hoping I can get some insight and perhaps be offered a better solution. Thanks in advance.
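One narrower avenue worth trying (a sketch, not from the question, using Apollo Client 3's documented evict/gc cache APIs; whether identify resolves an id here depends on how PaginatedBooks is keyed in your typePolicies):

// Evict just the paginated field, then garbage-collect unreachable
// entries; active queries watching the field should refetch it.
const id = client.cache.identify({ __typename: 'PaginatedBooks' }); // may need key fields
client.cache.evict({ id, fieldName: 'allBooks' });
client.cache.gc();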

Cloud Firestore optimized database setup for minimal read/write operations

I'm trying to get an answer on whether or not the way my database is set up will cause too many recursive reads and therefore drastically increase the number of read operations.
I currently have a collection of users; inside each user doc I have 3 subcollections: goods, bundles and parts. Each user has a list of parts and a list of bundles, for example.
Each doc in the bundles subcollection has an array of maps, and each map holds a reference to a doc in the parts subcollection.
When I query the bundles, I want to also get the details of each part in the bundle. Does this require that I run another onSnapshot?
Here's an example:
database:
  users (collection)
    userID
      parts
        partID1
        partID2
        partID3
      bundles
        bundleID1
          title: "string",
          parts: [
            { part: "/users/userID/parts/partID1", qty: 1 },
            { part: "/users/userID/parts/partID2", qty: 1 },
            { part: "/users/userID/parts/partID3", qty: 1 }
          ]
Getting the bundles:
initBundle(bid) {
  const path = this.database.collection('users').doc('userID').collection('bundles').doc(bid);
  path.ref.onSnapshot(bundle => {
    const partsArr = [];
    bundle.data().parts.forEach(part => {
      part.part.onSnapshot(partRef => {
        const partObj = {
          data: partRef.data(),
          qty: part.qty
        };
        partsArr.push(partObj);
      });
    });
    const bundleObj = {
      title: bundle.data().title,
      parts: partsArr
    };
    this.bundle.next(bundleObj);
  });
  return this.bundle;
}
I'm using Ionic/Angular for this, so when I return the item it needs to be an array of objects. I'm sort of recreating the object to include each part in this init. As you can see, for each part in the bundle's return I am doing another onSnapshot. This seems incorrect to me.
Something that is dawning on me is that I should probably be making a single call to the user, which in turn returns everything. But how do I get the subcollections at that point? I'm not sure how to proceed without racking up a bill!
If you use a nested part.part.onSnapshot(partRef => ...) listener, be sure to manage those listeners. I know of three common approaches (see the sketch after this list):
1. Once your outer onSnapshot listener disappears, the nested ones should probably also be stopped (as their data likely isn't needed anymore). This is a fairly simple approach, since you just need one list of listeners for the entire bundle.
2. Alternatively, you can manage each "part" listener based on the status of that part in the outer listener, removing the listener for "part1" when it disappears from the bundle. This can be made into a highly efficient solution, but it does require (quite some) additional code.
3. Many developers use get()s for the nested document reads, as that means there is nothing to manage.
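As a sketch of that third option, using the same field names as the question's initBundle but with get() swapped in for the nested listener:

// One-time reads for the parts; only the bundle itself stays live.
path.ref.onSnapshot(bundle => {
  const partReads = bundle.data().parts.map(part =>
    part.part.get().then(partRef => ({
      data: partRef.data(),
      qty: part.qty
    }))
  );
  Promise.all(partReads).then(partsArr => {
    this.bundle.next({ title: bundle.data().title, parts: partsArr });
  });
});

As a side effect, this waits for all part reads to finish before emitting the bundle, which also sidesteps the race in the original code where partsArr is emitted before the nested snapshots have arrived.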

Sequelize dynamic seeding

I'm currently seeding data with Sequelize.js and using hard-coded values for association IDs. This is not ideal, because I really should be able to do this dynamically, right? For example, associating users and profiles with a "has one" and "belongs to" association: I don't necessarily want to seed users with a hard-coded profileId. I'd rather do that in the profile seeds after I create profiles, adding the profileId to a user dynamically once the profiles have been created. Is this possible, and is it the normal convention when working with Sequelize.js? Or is it more common to just hard-code association IDs when seeding with Sequelize?
Perhaps I'm going about seeding wrong? Should I have a one-to-one correspondence between seed files and migration files in Sequelize? In Rails there is usually only one seeds file, which you have the option of breaking out into multiple files if you want.
In general, just looking for guidance and advice here. These are my files:
users.js
// User seeds
'use strict';
module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var users = [];
    for (let i = 0; i < 10; i++) {
      users.push({
        fname: "Foo",
        lname: "Bar",
        username: `foobar${i}`,
        email: `foobar${i}@gmail.com`,
        profileId: i + 1
      });
    }
    return queryInterface.bulkInsert('Users', users);
  },
  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Users', null, {});
  }
};
profiles.js
// Profile seeds
'use strict';
var models = require('./../models');
var User = models.User;
var Profile = models.Profile;
module.exports = {
  up: function (queryInterface, Sequelize) {
    /*
      Add altering commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkInsert('Person', [{
        name: 'John Doe',
        isBetaMember: false
      }], {});
    */
    var profiles = [];
    var genders = ['m', 'f'];
    for (let i = 0; i < 10; i++) {
      profiles.push({
        birthday: new Date(),
        gender: genders[Math.round(Math.random())],
        occupation: 'Dev',
        description: 'Cool yo',
        userId: i + 1
      });
    }
    return queryInterface.bulkInsert('Profiles', profiles);
  },
  down: function (queryInterface, Sequelize) {
    /*
      Add reverting commands here.
      Return a promise to correctly handle asynchronicity.

      Example:
      return queryInterface.bulkDelete('Person', null, {});
    */
    return queryInterface.bulkDelete('Profiles', null, {});
  }
};
As you can see, I'm just using a hard-coded for loop for both (not ideal).
WARNING: after working with sequelize for over a year, I've come to realize that my suggestion is a very bad practice. I'll explain at the bottom.
tl;dr:
never use seeders, only use migrations
never use your sequelize models in migrations, only write explicit SQL
My other suggestion still holds up that you use some "configuration" to drive the generation of seed data. (But that seed data should be inserted via migration.)
vv DO NOT DO THIS vv
Here's another pattern, which I prefer, because I believe it is more flexible and more readily understood. I offer it here as an alternative to the accepted answer (which seems fine to me, btw), in case others find it a better fit for their circumstances.
The strategy is to leverage the sqlz models you've already defined to fetch data that was created by other seeders, use that data to generate whatever new associations you want, and then use bulkInsert to insert the new rows.
In this example, I'm tracking a set of people and the cars they own. My models/tables:
Driver: a real person, who may own one or more real cars
Car: not a specific car, but a type of car that could be owned by someone (i.e. make + model)
DriverCar: a real car owned by a real person, with a color and a year they bought it
We will assume a previous seeder has stocked the database with all known Car types: that information is already available and we don't want to burden users with unnecessary data entry when we can bundle that data in the system. We will also assume there are already Driver rows in there, either through seeding or because the system is in-use.
The goal is to generate a whole bunch of fake-but-plausible DriverCar relationships from those two data sources, in an automated way.
const {
  Driver,
  Car
} = require('models')

module.exports = {
  up: async (queryInterface, Sequelize) => {
    // fetch base entities that were created by previous seeders;
    // these will be used to create seed relationships
    const [ drivers, cars ] = await Promise.all([
      Driver.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') }),
      Car.findAll({ /* limit ? */ order: Sequelize.fn('RANDOM') })
    ])

    const fakeDriverCars = Array(30).fill().map((_, i) => {
      // create new tuples that reference drivers & cars,
      // and which reflect the schema of the DriverCar table
    })

    return queryInterface.bulkInsert('DriverCar', fakeDriverCars)
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.bulkDelete('DriverCar')
  }
}
That's a partial implementation. However, it omits some key details, because there are a million ways to skin that cat. Those pieces can all be gathered under the heading "configuration," and we should talk about it now.
When you generate seed data, you usually have requirements like:
I want to create at least a hundred of them, or
I want their properties determined randomly from an acceptable set, or
I want to create a web of relationships shaped exactly like this
You could try to hard-code that stuff into your algorithm, but that's the hard way. What I like to do is declare "configuration" at the top of the seeder, to capture the skeleton of the desired seed data. Then, within the tuple-generation function, I use that config to procedurally generate real rows. That configuration can obviously be expressed however you like. I try to put it all into a single CONFIG object so it all stays together and so I can easily locate all the references within the seeder implementation.
Your configuration will probably imply reasonable limit values for your findAll calls. It will also probably specify all the factors that should be used to calculate the number of seed rows to generate (either by explicitly stating quantity: 30, or through a combinatoric algorithm).
As food for thought, here is an example of a very simple config that I used with this DriverCar system to ensure that I had 2 drivers who each owned one overlapping car (with the specific cars to be chosen randomly at runtime):
const CONFIG = {
  ownership: [
    [ 'a', 'b', 'c', 'd' ], // driver 1 linked to cars a, b, c, and d
    [ 'b' ],                // driver 2 linked to car b
    [ 'b', 'b' ]            // driver 3 has two of the same kind of car
  ]
};
I actually used those letters, too. At runtime, the seeder implementation would determine that only 3 unique Driver rows and 4 unique Car rows were needed, and apply limit: 3 to Driver.findAll, and limit: 4 to Car.findAll. Then it would assign a real, randomly-chosen Car instance to each unique string. Finally, when generating association tuples, it uses the string to look up the chosen Car from which to pull foreign keys and other values.
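A rough sketch of that lookup-and-expand step (the helper names and the DriverCar columns are illustrative, not from the original seeder):

// Map each unique letter to one of the randomly-ordered Car rows,
// then expand CONFIG.ownership into DriverCar tuples.
const keys = [...new Set([].concat(...CONFIG.ownership))]; // ['a', 'b', 'c', 'd']
const carByKey = {};
keys.forEach((key, i) => { carByKey[key] = cars[i]; });

const fakeDriverCars = [];
CONFIG.ownership.forEach((ownedKeys, driverIdx) => {
  ownedKeys.forEach(key => {
    fakeDriverCars.push({
      driverId: drivers[driverIdx].id, // column names assumed
      carId: carByKey[key].id
    });
  });
});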
There are undoubtedly fancier ways of specifying a template for seed data. Skin that cat however you like. Hopefully this makes it clear how you'd marry your chosen algorithm to your actual sqlz implementation to generate coherent seed data.
Why the above is bad
If you use your sequelize models in migration or seeder files, you will inevitably create a situation in which the application will not build successfully from a clean slate.
How to avoid madness:
Never use seeders, only use migrations
(Anything you can do in a seeder, you can do in a migration. Bear that in mind as I enumerate the problems with seeders, because that means none of these problems gain you anything.)
By default, sequelize does not keep records of which seeders have been run. Yes, you can configure it to keep records, but if the app has already been deployed without that setting, then when you deploy your app with the new setting, it'll still re-run all your seeders one last time. If that's not safe, your app will blow up. My experience is that seed data can't and shouldn't be duplicated: if it doesn't immediately violate uniqueness constraints, it'll create duplicate rows.
Running seeders is a separate command, which you then need to integrate into your startup scripts. It's easy for that to lead to a proliferation of npm scripts that make app startup harder to follow. In one project, I converted the only 2 seeders into migrations, and reduced the number of startup-related npm scripts from 13 to 5.
It can be hard to make sense of the order in which seeders run. Remember also that migrations and seeders are run by separate commands, which means you can't interleave them efficiently: you'll have to run all migrations first, then run all seeders. As the database changes over time, you'll run into the problem I describe next.
Never use your sequelize models in your migrations
When you use a sequelize model to fetch records, it explicitly fetches every column it knows about. So, imagine a migration sequence like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
That will work. Fast-forward to a date when you add a new column to Car (say, isElectric). That involves: (1) creating a migration to add the column, and (2) declaring the new column on the sequelize model. Now your migration process looks like this:
M1: create tables Car & Driver
M2: use Car & Driver models to generate seed data
M3: add isElectric to Car
The problem is that your sequelize models always reflect the final schema, without acknowledging the fact that the actual database is built by ordered accretion of mutations. So, in our example, M2 will fail because any built-in selection method (e.g. Car.findOne) will execute a SQL query like:
SELECT
  "Car"."make" AS "Car.make",
  "Car"."isElectric" AS "Car.isElectric"
FROM
  "Car"
Your DB will throw because Car doesn't have an isElectric column when M2 executes.
The problem won't occur in environments that are only one migration behind, but you're boned if you hire a new developer or nuke the database on your local workstation and build the app from scratch.
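To make the "only write explicit SQL" advice concrete, here is a minimal sketch of seeding inside a migration with raw SQL (table and column names are illustrative), so the migration never depends on what the models look like at HEAD:

// Seed data inside a migration using raw SQL; the migration stays valid
// no matter how the sequelize models evolve later.
module.exports = {
  up: (queryInterface) =>
    queryInterface.sequelize.query(
      `INSERT INTO "Car" ("make", "createdAt", "updatedAt")
       VALUES ('Toyota', NOW(), NOW()), ('Honda', NOW(), NOW());`
    ),
  down: (queryInterface) =>
    queryInterface.sequelize.query(
      `DELETE FROM "Car" WHERE "make" IN ('Toyota', 'Honda');`
    )
};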
Instead of using different seeds for Users and Profiles, you could seed them together in one file using Sequelize's create-with-association feature.
Additionally, when using a series of create() calls you must wrap them in a Promise.all(), because the seeding interface expects a Promise as its return value.
up: function (queryInterface, Sequelize) {
  return Promise.all([
    models.Profile.create({
      data: 'profile stuff',
      users: [{
        name: "name",
        ...
      }, {
        name: 'another user',
        ...
      }]
    }, {
      include: [ models.User ]
    }),
    models.Profile.create({
      data: 'another profile',
      users: [{
        name: "more users",
        ...
      }, {
        name: 'another user',
        ...
      }]
    }, {
      include: [ models.User ]
    })
  ])
}
Not sure if this is really the best solution, but that's how I got around maintaining foreign keys myself in seed files.

Forward and backward pagination with relay js

The Relay cursor connections specification says:
hasNextPage will be false if the client is not paginating with first, or if the client is paginating with first, and the server has determined that the client has reached the end of the set of edges defined by their cursors.
What I understand from that spec is that Relay can only paginate in one direction, forwards or backwards. That seems reasonable for an infinite-scroll pagination implementation.
But how would one implement sequential page navigation, similar to the Stack Overflow questions page, which can navigate both ways?
Is Relay.js suitable for this kind of pagination? I can't rely on the hasNextPage and hasPreviousPage fields, which never have a true value unless I paginate with first and last, which is also strongly discouraged.
What I understand from that spec is that relay can only paginate with single direction, forwards or backwards.
Correct. The original design was exactly that, an infinite scroll pagination. It was never intended to be used for bi-directional pagination.
But how about sequential pages like stackoverflow questions page?
This is quite tricky to do with the default implementation of Relay as it stands today because, as stated above, it wasn't designed for bi-directional navigation.
That said, you can still achieve this, though not through optimal means.
Method 1 - Passing page info through Relay
This is probably the easiest method to implement, though the less optimal one. It involves passing the page info to Relay as a GraphQL argument. This requires a new Relay query each time you navigate to a new page.
Example
Assume you have a view displaying a list of members. You fetch a list of member nodes with memberList. If you only want to display 10 members per "page" and we're on page 3, you could do:
app: () => Relay.QL`
  fragment on App {
    memberList(page: 3, limit: 10) {
      ...
    }
  }`
In your backend your SQL query would now look something like:
SELECT id, username, email FROM members LIMIT 20, 10;
Where 20 is the starting record, computed as (page - 1) * 10, and 10 is the limit (how many items after the start). So the above will fetch records 21-30.
Note that you would need initial values for page and limit. So your final query would be:
prepareVariables() {
  return {
    page: 1,
    limit: 10,
  };
},
fragments: {
  app: () => Relay.QL`
    fragment on App {
      memberList(page: $page, limit: $limit) {
        ...
      }
    }`
}
And when navigating to a new page we need to update the values:
_goToPage = (pg) => {
  this.props.relay.setVariables({
    page: pg,
  });
}
Method 2 - Manually managing cursors
This is a slightly hacky implementation in that it essentially gives you bi-directional navigation, even though it is not natively supported. This approach is slightly more optimal than the first one, but sacrifices the ability to jump to a page directly.
Note that we cannot use both first and last:
Including a value for both first and last is strongly discouraged, as it is likely to lead to confusing queries and results.
Hence we would construct our query normally. Something like:
app: () => Relay.QL`
  fragment on App {
    memberList(first: 10, after: $cursor) {
      ...
    }
  }`
where the initial value for $cursor is null.
As you navigate to the next page and set the value for $cursor to be the cursor of the last member node, you also save the value in a local stack variable.
This way, whenever you navigate back you would simply pop the last cursor from the stack and use that for your after argument. It requires some extra logic in your application, but this is definitely doable.
Based on Method 2 given by @Chris, this is my solution in Vue.js; it's roughly the same for React as well.
loadPage(type) {
  const { endCursor } = this.pageInfo
  let cursor = null
  if (type === 'previous') {
    // going back: pop the current cursor and reuse the previous one
    this.cursorStack = this.cursorStack.slice(0, this.cursorStack.length - 1)
    cursor = this.cursorStack[this.cursorStack.length - 1]
  } else {
    // going forward: remember the cursor we're paginating past
    cursor = endCursor
    this.cursorStack = [...this.cursorStack, cursor]
  }
  this.fetchMore({
    variables: { cursor: cursor },
    updateQuery: (previousResult, { fetchMoreResult }) => {
      if (!fetchMoreResult) return previousResult
      return fetchMoreResult
    }
  })
},
In the template:
<button :disabled="!cursorStack.length" @click="loadPage('previous')" />
<button :disabled="!pageInfo.hasNextPage" @click="loadPage('next')" />

Dojo JsonRestStore with array not at root-level of JSON response

Is there a way to configure a JsonRestStore to work with an existing web service that returns an array of objects which is not at the root-level of the JSON response?
My JSON response is currently similar to this:
{
  message: "",
  success: true,
  data: [
    { name: "Bugs Bunny", id: 1 },
    { name: "Daffy Duck", id: 2 }
  ],
  total: 2
}
I need to tell the JsonRestStore that it will find the rows under "data", but I can't see a way to do this from looking at the documentation. Schema seems like a possibility, but I can't make sense of it from the docs (or from what I find on Google).
My web services return data in a format expected by stores in Ext JS, but I can't refactor years' worth of web services now (dealing with pagination via HTTP headers instead of query string values will probably be fun too, but that's a problem for another day).
Thanks.
While it's only barely called out in the API docs, there is an internal method in dojox/data/JsonRestStore named _processResults that happens to be easily overridable for this purpose. It receives the data returned by the service and the original Deferred from the request, and is expected to return an object containing items and totalCount.
Based on your data above, something like this ought to work:
require(['dojo/_base/declare', 'dojox/data/JsonRestStore'], function (declare, JsonRestStore) {
  var CustomRestStore = declare(JsonRestStore, {
    _processResults: function (results) {
      // map the service's envelope onto what JsonRestStore expects
      return {
        items: results.data,
        totalCount: results.total
      };
    }
  });
});
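Hypothetical usage, continuing inside the same require callback (the target URL is an assumption; JsonRestStore implements the dojo/data Read API, hence fetch):

var store = new CustomRestStore({ target: '/services/characters/' });
store.fetch({
  onComplete: function (items) {
    // items now come from the "data" property of the response
    console.log(items.length);
  }
});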
The idea with dojo/store is that reference stores are provided, but they are intended to be customized to match whatever data format you want. For example, https://github.com/sitepen/dojo-smore has a few additional stores (e.g. one that handles Csv data). These stores provide good examples for how to handle data that is offered under a different structure.
There's also the new dstore project, http://dstorejs.io/, which will eventually replace dojo/store in Dojo 2 but works today against Dojo 1.x. This might be easier for creating a custom store to match your data structure.
