I'm no expert in these things (I'm used to Laravel), but running one query in Apollo Server is taking ~7.2 seconds, for maybe 300 items total.
The entire resolver is below - as you can see there's essentially no logic aside from running a query. It's just humongously slow.
getMenu: async (parent, { slug }, { models, me }) => {
  const user = await models.User.findByPk(me.id)
  const account = await models.Account.findByPk(user.accountId, {
    include: [{
      model: models.Venue,
      as: 'venues',
      include: getVenueIncludes(models)
    }],
    minifyAliases: true
  })
  return account ? account.venues.find(venue => venue.slug === slug) : null
},
I realise this is rather vague, but does anyone happen to know where I'd look to try and improve this? I understand they're different, but in a Laravel app I can load 10 times that amount (with more nesting) in under a second...
Aha!!
Adding separate: true on the hasMany associations. Good grief, that cut request times from 7.2 seconds to 500ms.
Amazing.
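For anyone who finds this later, the change lives inside getVenueIncludes (not shown above), so here's a rough sketch with made-up nested models:
const getVenueIncludes = (models) => [
  {
    model: models.Menu,       // example association, not the real one
    as: 'menus',
    separate: true,           // run this hasMany include as its own query instead of one giant join
    include: [{
      model: models.MenuItem, // example association, not the real one
      as: 'items',
      separate: true
    }]
  }
]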
I want this:
const exists = await database.collection('products_info')
  .findOne({ date: LADate })
to become this:
const exists = await database
  .collection('products_info')
  .findOne({ date: LADate })
I have this in my .eslintrc:
'newline-per-chained-call': ['error', { ignoreChainWithDepth: 1 }]
ignoreChainWithDepth does not accept a value lower than 1.
How can we achieve this?
I happened to be looking for the same thing. It seems to be a known bug: https://github.com/eslint/eslint/issues/12970, but because the ESLint team has frozen stylistic rules and is no longer accepting changes to them, there's nothing to be done except write a plugin to handle it (see the sketch below).
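For what it's worth, a custom rule to enforce this could be as simple as reporting any chained call whose dot isn't at the start of its own line. A rough sketch (the plugin and rule names are made up, and there's no autofix):
// eslint-plugin-chain-newline/index.js (hypothetical plugin and rule names)
module.exports = {
  rules: {
    'newline-per-call': {
      meta: {
        type: 'layout',
        docs: { description: 'require every chained call to start on its own line' },
        schema: [],
      },
      create(context) {
        return {
          CallExpression(node) {
            const callee = node.callee;
            // Only consider calls of the form something.method(...)
            if (callee.type !== 'MemberExpression' || callee.computed) return;
            // Report when the object and the called method sit on the same line,
            // i.e. the "depth 0" behaviour the built-in rule does not allow
            if (callee.object.loc.end.line === callee.property.loc.start.line) {
              context.report({
                node: callee.property,
                message: 'Expected a newline before ".{{name}}()".',
                data: { name: callee.property.name },
              });
            }
          },
        };
      },
    },
  },
};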
I'm trying to load entries from my mongoDB database one at a time. It works for about 400/1000 entries then breaks with the following error:
Executor error during find command :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.
My Axios get request looks like:
for (let i = 0; i < total; i++) {
  await axios({
    method: "GET",
    url: "/api/entries/",
    params: { from: index + i, _limit: 1 },
  }).then((res) => {
    setPics((prev) => {
      return [...new Set([...prev, ...res.data])];
    });
    setIndex((prev) => prev + 1)
    setMoreEntries(res.data.length > 0)
  })
}
and in my controller my GET function looks like:
const getEntries = asyncHandler(async (req, res) => {
  const entries = await Entry.find()
    .allowDiskUse(true)
    .sort({ createdAt: 'desc' })
    .skip(req.query.from)
    .limit(req.query._limit)
  res.status(200).json(entries)
})
Everything works perfectly until about half the entries have loaded, then it breaks. I thought adding allowDiskUse(true) would fix it, but same result.
Edit: I should mention, if I take out .sort({ createdAt: 'desc' }) it works perfectly, but loads in the opposite order.
The syntax itself is fine; I am not quite sure why it's not working for you.
I have two theories about what could be happening:
You are hosting your db on Atlas on a cluster tier that doesn't support this operation, as specified in their docs:
Atlas M0 free clusters and M2/M5 shared clusters don't support the allowDiskUse
It seems you're using Mongoose; maybe you are on an older version that has an issue with this cursor flag or uses a different syntax.
Regardless of the source of the issue, I recommend you solve this by either:
Creating an index on createdAt. When an index exists, sorting does not need to load the documents into memory, which will both make the query more efficient and solve this issue.
Just using the _id index and sorting with { _id: -1 }. From your comment about removing the sort operation and getting documents in reverse order, it seems your createdAt corresponds to the document creation date (which makes sense); ObjectId _id values are monotonically increasing, which means you can just use that field to sort. Both options are sketched below.
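A minimal sketch of both options, assuming the Entry model from the controller above (the entrySchema variable name is an assumption):
// Option 1: add an index on createdAt so the sort can run off the index
// (can also be created once in the shell: db.entries.createIndex({ createdAt: -1 }))
entrySchema.index({ createdAt: -1 });

// Option 2: sort on _id instead; it is already indexed and increases with insertion time
const entries = await Entry.find()
  .sort({ _id: -1 })
  .skip(Number(req.query.from))
  .limit(Number(req.query._limit))
res.status(200).json(entries)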
I ended up just doing a backwards for-loop, and using skip instead of sort, like
const getOne = asyncHandler(async (req, res) => {
  const entry = await Entry.findOne({}, {}).allowDiskUse(true).skip(req.query.from)
  res.status(200).json(entry)
})
Which worked perfectly for me, but Tom's answer is what I was looking for, thank you :)
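For completeness, the client-side loop ended up looking roughly like this (a simplified sketch of the actual state handling):
for (let i = total - 1; i >= 0; i--) {
  await axios({
    method: "GET",
    url: "/api/entries/",
    params: { from: i },
  }).then((res) => {
    // findOne returns a single document, so append it directly
    setPics((prev) => [...prev, res.data]);
  });
}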
For a school project, I have to make a quiz app. It is possible to choose a difficulty, a category and an amount of desired questions. The API is a URL that can easily be modified by changing some values. For example: https://quizapi.io/api/v1/questions?apiKey=MYAPIKEY&limit=15&difficulty=hard&category=cms. If you change the category value in the URL, you get a maximum of 15 questions on a hard difficulty about that topic instead. I think you see where this is going.
However, I have set up my code so that the difficulty, category and amount are stored in localStorage and fetched when the quiz is started. At the moment I get the amount of questions I want, but I can't change my difficulty or category, probably because the template literals aren't working in the fetch call. Maybe someone can give me an idea, or maybe I'm making a mistake in my current code:
let storageDif = localStorage.getItem("mD");
console.log(storageDif.toString());
let storageCat = localStorage.getItem("mC");
console.log(storageCat);

let geslideVragen = localStorage.getItem("slider");
let MAX_VRAGEN = geslideVragen;
console.log(MAX_VRAGEN);

let vragen = [];

fetch(`https://quizapi.io/api/v1/questions?apiKey=kAFKilHLeEcfLkGE2H0Ia9uTIp1rYHDTIYIHs9qf&limit=15&difficulty=hard&category=${storageCat}`)
  .then((res) => {
    return res.json();
  })
  .then((loadedQuestions) => {
    for (let i = 0; i < MAX_VRAGEN; i++) {
      vragen = loadedQuestions;
      console.log(vragen[i].question);
    }
    startGame();
  })
  .catch((err) => {
    console.error(err);
  });
I'm sure you have found out by now that you're only interpolating the category. To get it to work correctly, you'd need to do this:
`https://quizapi.io/api/v1/questions?apiKey=kAFKilHLeEcfLkGE2H0Ia9uTIp1rYHDTIYIHs9qf&limit=${MAX_VRAGEN}&difficulty=${storageDif}&category=${storageCat}`
That being said, you should never expose your API keys this way, because, especially with cloud services, a leaked key can easily cost you a five-figure bill in a single day if someone decides to use it for their own purposes. There are plenty of scrapers that scour GitHub for exposed API keys for illegitimate uses.
Also, you should apply a check with an if statement to make sure all the values are present, so that it doesn't fetch anything if one of them is undefined.
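A minimal sketch of that guard, reusing the variable names from the question (API_KEY here is a stand-in for your key, which ideally shouldn't live in client-side code at all):
if (storageDif && storageCat && MAX_VRAGEN) {
  fetch(`https://quizapi.io/api/v1/questions?apiKey=${API_KEY}&limit=${MAX_VRAGEN}&difficulty=${storageDif}&category=${storageCat}`)
    .then((res) => res.json())
    .then((loadedQuestions) => {
      vragen = loadedQuestions;
      startGame();
    })
    .catch((err) => console.error(err));
} else {
  console.warn("Missing difficulty, category or question count in localStorage.");
}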
I finally got RANGE_ADD mutations to work. Now I'm a bit confused and worried about their performance.
One outputField of a RANGE_ADD mutation is the edge to the newly created node. Each edge has a cursor. cursorForObjectInConnection() (see the graphql-relay docs) is a helper function that creates that cursor. It takes an array and a member object. See:
newChatroomMessagesEdge: {
  type: chatroomMessagesEdge,
  resolve: async ({ chatroom, message }) => {
    clearCacheChatroomMessages();
    const messages = await getChatroomMessages(chatroom.id);
    let messageToPass;
    for (const m of messages) {
      if (m.id === message.id) {
        messageToPass = m;
      }
    }
    return {
      cursor: cursorForObjectInConnection(messages, messageToPass),
      node: messageToPass,
    };
  },
},
plus a similar edge for user messages.
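For context, as far as I can tell cursorForObjectInConnection() just looks up the object's position in the given array and encodes that offset as the cursor, roughly like this (an illustration of the idea, not the library's actual source):
function cursorForObjectInConnection(data, object) {
  // the object has to be the exact element from the array, hence the lookup by id above
  const offset = data.indexOf(object);
  if (offset === -1) return null;
  return Buffer.from('arrayconnection:' + offset).toString('base64');
}
So the full message list is only needed to work out the new message's offset.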
Now this is what confuses me. Say you want to make a chatroom app. You have a userType, a chatroomType and a messageType. Both the userType and the chatroomType have a messages field. It queries for all of the user's or chatroom's messages respectively and is defined as a Relay connection that points to messageType. Now, each time a user sends a new message, two RANGE_ADD mutations are committed: one that creates an edge for the chatroom's messages and one that creates an edge for the user's messages. Each time, because of cursorForObjectInConnection(), a query for all of the user's messages and one for all of the chatroom's messages is sent to the database. See:
const messages = await getChatroomMessages(chatroom.id);
As one can imagine, a lot of these message-sent mutations occur in a chatroom, and the number of messages in a chatroom grows rapidly.
So here is my question: Is it really necessary to query for all of the chatroom messages each time? Performance-wise, this seems like an awful thing to do.
Sure, although I have not looked into it yet, there are optimistic updates I can use to ease the pain client-side, so a sent message gets displayed immediately. But still, the endless database queries continue.
Also, there is DataLoader. But as far as I understand DataLoader, it is used as a per-request batching and caching mechanism, especially in conjunction with GraphQL. Since each mutation is a new request, DataLoader does not seem to help on that front either.
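(To illustrate what I mean by per-request: my understanding is that a fresh loader is built for every incoming request, roughly like the sketch below; the loader name is made up and getChatroomMessages is the function from my resolver above.)
const DataLoader = require('dataloader');

// A new set of loaders is created per GraphQL request, so the cache never outlives that request
function createLoaders() {
  return {
    chatroomMessages: new DataLoader((chatroomIds) =>
      Promise.all(chatroomIds.map((id) => getChatroomMessages(id)))
    ),
  };
}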
If anyone can shed some light on my thoughts and worries, I'd be more than happy :)
I run the following query on my whole data set (approx. 3 million documents) in MongoDB to change user IDs that are strings into ints. This query does not seem to complete:
var cursor = db.play_sessions.find();
while (cursor.hasNext()) {
    var play = cursor.next();
    db.play_sessions.update(
        { _id: play._id },
        { $set: { user_id: new NumberInt(play.user_id) } }
    );
}
I run this query on the same data set and it returns relatively quickly:
db.play_sessions.find().forEach(function(play) {
    if (play.score && play.user_id && play.user_attempt_no && play.game_id && play.level && play.training_session_id) {
        print(play.score, ",", play.user_id, ",", play.user_attempt_no, ",", play.game_id, ",", play.level, ",", parseInt(play.training_session_id).toFixed());
    } else if (play.score && play.user_id && play.user_attempt_no && play.game_id && play.level) {
        print(play.score, ",", play.user_id, ",", play.user_attempt_no, ",", play.game_id, ",", play.level);
    }
});
I understand I am writing to the database in the first query but why does the first query never seem to return, while the second does so relatively quickly? Is there something wrong with the code in the first query?
Three million documents is quite a lot, so the whole operation is going to take a while. But the main thing to consider here is that you are asking to both "send" data to the database and "receive" an acknowledged write response (because that is what happens) three million times. That alone is a lot more waiting in between operations than simply iterating a cursor.
Another reason is that you are very likely running MongoDB 2.6 or a later revision. There is a core difference between earlier versions and 2.6 onwards in how this code is processed in the shell. At the core of this is the Bulk Operations API, which contains methods that are actually used by all the shell helpers for every interaction with the database.
In prior versions, the "write concern" acknowledgement was not returned for each iteration of such a loop. The way it is done now (since the helpers actually use the Bulk API), the acknowledgement is returned for every single iteration, which slows things down a lot, unless of course you use the Bulk operations directly.
So to "re-cast" your values in modern versions, do this instead:
var bulk = db.play_sessions.initializeUnorderedBulkOp();
var count = 0;

// Only touch documents where user_id is still a string (BSON type 2)
db.play_sessions.find({ "user_id": { "$type": 2 } }).forEach(function(doc) {
    bulk.find({ "_id": doc._id }).updateOne({
        "$set": { "user_id": NumberInt(doc.user_id) }
    });
    count++;

    // Send the queued updates to the server every 10000 documents
    if ( count % 10000 == 0 ) {
        bulk.execute();
        bulk = db.play_sessions.initializeUnorderedBulkOp();
    }
});

// Flush any remaining queued updates
if ( count % 10000 != 0 )
    bulk.execute();
Bulk operations send their whole "batch" in a single request. In actual fact the underlying driver breaks this up into individual batch requests of 1000 items, but 10000 is a reasonable number that does not take up too much memory in most cases.
The other optimization here is that the query selects only the documents whose user_id is presently a string, using the $type operator to identify them. This could possibly speed things up if some of the data has already been converted.
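As a side note, if you are on a newer shell or driver that has bulkWrite, the same batching idea can be written like this (just a sketch; the flush size is arbitrary):
var ops = [];
db.play_sessions.find({ "user_id": { "$type": 2 } }).forEach(function(doc) {
    ops.push({
        updateOne: {
            "filter": { "_id": doc._id },
            "update": { "$set": { "user_id": NumberInt(doc.user_id) } }
        }
    });
    // Flush periodically so the ops array does not grow unbounded
    if ( ops.length == 1000 ) {
        db.play_sessions.bulkWrite(ops);
        ops = [];
    }
});
if ( ops.length > 0 )
    db.play_sessions.bulkWrite(ops);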
If indeed you have an earlier version of MongoDB and you are running this conversion on a collection that is not on a sharded cluster, then your other option is to use db.eval().
Take care to actually read the documentation for db.eval() though. This is not a good idea; you should never use it in production, and only as a last resort for a one-off conversion. The code is submitted as JavaScript and actually run on the server, so a high level of locking can and will occur while it runs. You have been warned:
db.eval(function() {
    db.play_sessions.find({ "user_id": { "$type": 2 } }).forEach(function(doc) {
        db.play_sessions.update(
            { "_id": doc._id },
            { "$set": { "user_id": NumberInt(doc.user_id) } }
        );
    });
});
Use with caution and prefer the "batch" processing or even the basic loop on a machine as close as possible in network terms to the actual database server. Preferably on the server.
Also, where the version permits it and you still deem the eval approach necessary, try to use the Bulk operations methods inside it anyway, as that is the greatly optimized approach.