I have used Meteor's publish and subscribe methods for client–server interaction. In my scenario I am using D3.js to generate a bar chart, and as soon as data is entered into the MongoDB collection I use a client-side function to regenerate the chart. My issue is that publish/subscribe is too slow to react, and even if I limit the number of documents returned by MongoDB, the issue persists. It is also inconsistent: sometimes it reacts in under 1 second, other times it takes 4-5 seconds. Please guide me on what is wrong with my implementation.
Here is the server side code,
Test = new Mongo.Collection("test");

Meteor.publish('allowedData', function () {
  return Test.find({});
});
and here is the client side code,
Test = new Mongo.Collection("test")
Meteor.subscribe('allowedData');
Meteor.setTimeout(function () {
  Test.find().observe({
    added: function (document) {
      // something
    },
    changed: function () {
      // something
    },
    removed: function () {
      // something
    }
  });
}, 1000); // illustrative delay in ms
From your comments I see that you need a report chart that is reactive. Even though that is your requirement, a chart like this is too expensive. In fact, when your system grows bigger, say around 10,000 documents for one single chart, this kind of chart will crash your server frequently.
To work around this problem, I have two suggestions:
Define a method that returns the data for the chart, and set up an interval timer on the client to call that method periodically. The interval value depends on your needs; 10 seconds should be fine for a chart. It is not completely reactive this way, since you only get the newest data after each interval, but it is still better than a slow and crash-prone system. There are good modules available for managing jobs and timers.
Use the Meteor package meteor-publish-join (disclaimer: I am the author). It was made to solve exactly the kind of problem you have: the need to do reactive aggregations/joins on a big data set while keeping good overall performance.
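Stripped of Meteor specifics, the first suggestion is just a poll loop. In a real app, fetchChartData would wrap a Meteor.call and redraw would feed D3; both names here are hypothetical:

```javascript
// Poll for fresh chart data every `intervalMs` instead of observing the
// collection reactively. fetchChartData and redraw are hypothetical hooks:
// the first would wrap a Meteor.call, the second a D3 render.
function startChartPolling(fetchChartData, redraw, intervalMs) {
  var timer = null;
  var stopped = false;

  function tick() {
    fetchChartData()
      .then(function (data) { redraw(data); })
      .catch(function (err) { console.error('chart poll failed', err); })
      .then(function () {
        if (!stopped) timer = setTimeout(tick, intervalMs); // reschedule
      });
  }

  tick();
  return function stop() { stopped = true; clearTimeout(timer); };
}
```

Calling the returned function stops the polling, for example when the chart's template is destroyed.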
Related
I was profiling a "download leak" in my Firebase database (I'm using the JavaScript SDK and Firebase Functions on Node.js) and finally narrowed it down to the update function, which surprisingly causes data download (and in my case impacts billing quite significantly: roughly 50% of the bill comes from this leak):
Firebase functions index.js:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr").update({field1: "val1", field2: "val2"});
});
This function generates downloads at the "/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr" node.
If I change the paths to something like this:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42").update({"wr/field1": "val1", "wr/field2": "val2"});
});
It generates downloads at the "/user/gCapeUausrUSDRqZH8tPzcrqnF42" node.
Here are the results of firebase database:profile:
How can I get rid of the download while updating data or reduce the usage since I only need to upload it?
I don't think this can be avoided in a Firebase Cloud Functions trigger.
The .onWrite((data, context) => …) handler receives a data argument, which is the complete DataSnapshot.
There is no way to configure it not to fetch its value.
Still, there are two things that you might do to help reduce the data cost:
Watch a smaller subtree with the trigger, e.g. functions.database.ref("some/data/path") rather than ref("some").
Use a more specific hook, i.e. onCreate() or onUpdate() instead of onWrite().
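As a sketch of the second suggestion (the paths reuse the question's; assuming that updates to existing data are what you care about): onUpdate narrows when the function fires, and watching a smaller subtree keeps the delivered snapshots small.

```javascript
// Sketch: fire only on updates of the watched path, not on every write.
// The paths come from the question; splitting the logic across
// onCreate/onUpdate is an assumption about your data flow.
const functions = require('firebase-functions');

exports.myUpdateTrigger = functions.database
  .ref('some/data/path')
  .onUpdate((change, context) => {
    return change.after.ref.root
      .child('/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr')
      .update({ field1: 'val1', field2: 'val2' });
  });
```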
You should expect that all operations will round trip with your client code. Otherwise, how would the client know when the work is complete? It's going to take some space to express that. The screenshot you're showing (which is very tiny and hard to read - consider copying the text directly into your question) indicates a very small amount of download data.
To get a better sense of what the real cost is, run multiple tests and see if that tiny cost is itself actually just part of the one-time handshake between the client and server when the connection is established. That cost might not be an issue as your function code maintains a persistent connection over time as the Cloud Functions instance is reused.
I am using jimp (https://www.npmjs.com/package/jimp) in Meteor to generate an image server-side. In other words, I am 'calculating' the pixels of the image using a recursive algorithm, and the algorithm takes quite some time to complete.
The issue I am having is that this seems to completely block the meteor server. Users trying to visit the webpage while an image is being generated are forced to wait. The website is therefore not rendered at all.
Is there any (meteor) way to run the heavy recursive algorithm in a thread or something so that it does not block the entire website?
Node (and consequently meteor) runs in a single process which blocks on CPU activity. In short, node works really well when you are IO-bound, but as soon as you do anything that's compute-bound you need another approach.
As was suggested in the comments above, you'll need to offload this CPU-intensive activity to another process which could live on the same server (if you have multiple cores) or a different server.
We have a similar problem at Edthena, where we need to transcode a subset of our video files. For now I decided to use a Meteor-based solution, because it was easy to set up. Here's what we did:
When new transcode jobs need to happen, we insert a "video job" document into the database.
On a separate server (we max out the full CPU when transcoding), we have an app which calls observe like this:
Meteor.startup(function () {
  // Listen for non-failed transcode jobs in creation order. Use a limit of 1 to
  // prevent multiple jobs of this type from running concurrently.
  var selector = {
    type: 'transcode',
    state: { $ne: 'failed' },
  };
  var options = {
    sort: { createdAt: 1 },
    limit: 1,
  };

  VideoJobs.find(selector, options).observe({
    added: function (videoJob) {
      transcode(videoJob);
    },
  });
});
As the comments indicate, this allows only one job to run at a time, which may or may not be what you want. It has the further limitation that you can only run it on one app instance (multiple instances calling observe would each process the same job). So it's a pretty simplistic job queue, but it may work for your purposes for a while.
As you scale, you could use a more robust mechanism for dequeuing and processing tasks, such as Amazon's SQS service. You could also explore other Meteor-based solutions, like job-collection.
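The single-instance limitation above can be lifted by claiming jobs atomically: in Mongo, a findOneAndUpdate that flips state from 'pending' to 'claimed' guarantees two workers never take the same job. A minimal in-memory stand-in of the idea (the state names are assumptions, not from the original app):

```javascript
// Claim the oldest pending transcode job for one worker. With MongoDB the
// filter-and-flip below would be a single atomic findOneAndUpdate; this
// in-memory version just illustrates the claim semantics.
function claimNextJob(jobs, workerId) {
  var candidates = jobs
    .filter(function (j) { return j.type === 'transcode' && j.state === 'pending'; })
    .sort(function (a, b) { return a.createdAt - b.createdAt; });
  if (candidates.length === 0) return null;

  var job = candidates[0];
  job.state = 'claimed';    // in Mongo this test-and-set is atomic
  job.claimedBy = workerId; // record the owner for debugging/timeouts
  return job;
}
```

Each worker polls claimNextJob in a loop; a claimed job is invisible to every other worker, so you can run as many instances as you have cores.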
I believe you're looking for Meteor.defer(yourFunction).
Relevant Kadira article: https://kadira.io/academy/meteor-performance-101/content/make-your-app-faster
Thanks for the comments and answers! It seems to be working now. I did what David suggested: I am running a second Meteor app on the same server that deals with generating the images. However, this app was still eating up all the processing power.
To fix that, I set a slightly lower priority on the generating process with the renice command on its PID (https://www.nixtutor.com/linux/changing-priority-on-linux-processes/). This works! Any time a user logs into the website, the other (client-facing) Meteor application gains priority over the generating algorithm. No delay at all anymore.
The only issue I am having now is that whenever the server restarts, I somehow have to rerun the (re)nice command.
Since I am using Meteor Up for deployment, both apps run as the same user with the same command: node main.js. I am currently trying to figure out how to run the nice command within the startup script of Meteor Up (located at /etc/init/.conf).
I'm trying to test a node server locally for something I eventually want to deploy on DigitalOcean (which is a whole other story). I have successfully setup a local node server with rest endpoints and a self signed cert for the time being. My issue is I want to store data that my user's can retrieve by hitting a rest endpoint. My current thought is that I would have some sort of background task running on the server that is constantly getting new data, so that when some of the more popular queries come through, I have the newest data for them, sanitized and ready to go.
My problem is I can't understand how I am supposed to have a background function running that calls itself over and over without eventually causing a memory problem. I was looking at Bull and Kue, but I am not sure if those are suitable for my specific needs. I've also never dealt with NoSQL databases before, so Redis is fairly new to me too. Any suggestions or pointers? I'm a little overwhelmed and not sure where to go from here, even though I have a general idea of what I am trying to do.
Two options come to mind. If you're using the "hiredis" package you can attain 200k queries / sec on a modern quad core. It's conceivable you could just retrieve the freshest data every time.
The other option involves updating application memory at startup and again at intervals, using a function that reschedules itself:
// Assumes a node_redis client promisified with Bluebird, so hmgetAsync
// returns a promise. The hash key and field names are placeholders.
function updateValuesFromRedis(seconds) {
  return redis.hmgetAsync('keyhash', 'field1', 'field2')
    .then(function (values) {
      return saveValues(values);
    })
    .finally(function () {
      setTimeout(function () {
        console.log('Updating...');
        updateValuesFromRedis(seconds);
      }, 1000 * seconds);
    })
    .catch(function (error) {
      console.error('Error updating values from Redis!', error);
    });
}

(function schedule() {
  updateValuesFromRedis(60);
})();
I don't even know how to explain this directly, but I'll try.
Intro
I'm building a PhoneGap app with AngularJS, and I'm trying to stitch up WebSocket messages to update my UI. I used to have services that communicated with the server periodically and changed their data accordingly, and it was working pretty well. An example:
Service1.js:
var applyUpdate = function (data) {
  angular.extend(instance, data);
  if (!$rootScope.$$phase) $rootScope.$apply();
};

this.update = function () {
  DataProvider.get('getThermostatSettings', {}, applyUpdate, function () {}, true);
};
So basically I was simply calling the "update" function every 5 seconds, receiving data from the server and updating this service. The controllers would then simply access this service, and everything was working.
The problem
The problem now is that I stitched up a WebSocket interface written in Java that handles all the WebSocket implementation for me. I took it from https://github.com/ziadloo/PhoneGap-Java-WebSocket . It basically registers a JavaScript interface, accessible from JavaScript, to communicate with Java.
Every time there is a change now, the server simply pushes a string via the WebSocket saying that the client should update, and then I call the "update" function from the service in my JavaScript, instead of wastefully polling for data periodically.
The WebSockets are working well. I can see the message coming, I call the update, it fetches everything correctly from the server, the "update" function calls then the "applyUpdate" with the correct values and etc, even the "$rootScope.$apply" gets called.
But the updated data inside the Angular service is not visible! It seems as if these changes are being run in a different thread. I know JavaScript is single-threaded, but that is how it looks.
Let me put this a better way: I have the impression that as soon as the WebView calls the JavaScript callback function, I have a different JavaScript "thread" running, and nothing outside it can be accessed.
More about it
I wrote a function that simply outputs a number every 5 seconds; the WebSocket updates this number. My output is the following:
N: 1
N: 1
Then after the WebSocket pushes the data with a new Number (2), I get TWO prints:
N: 1
N: 2
N: 1
N: 2
N: 1
N: 2
Does anyone have any pointers about this? Has anyone tried doing something similar? I'm no Angular pro, but it just seems that everything gets messed up as soon as I get a callback from the Java interface.
I have already looked at "Angularjs model changes after websocket data push from server" and my code looks very similar. The only problem, I think, is this callback from Java.
Thank you
Update: It's a problem with AngularJS. If I set this variable on a window global, everything gets assigned normally. Is it possible that Angular is somehow creating two different $scopes?
Update 2: Just to be a bit clearer: in the browser, everything works as expected. Only when I run it in the emulator does this get messed up.
It's entirely possible for AngularJS to be making two $scopes. Just see the debugging in this screencast.
It may help to show all the $scopes on the page.
You should also be aware that WebSockets on mobile aren't the best idea; crashes and failures are fairly likely, based on that talk (by a Nodejitsu employee with expertise in WebSockets).
I'm working on a real-time JavaScript application that requires all changes to a database to be mirrored instantly in JavaScript, and vice versa.
Right now, when changes are made in JavaScript, I make an AJAX call to my API and make the corresponding changes to the DOM. On the server, the API handles the request and finishes by sending a push via PubNub to the other current JavaScript users with the change that was made. I also include a sequential changeID so that JavaScript can resync the entire data set if it missed a push. Here is an example of such a push:
{
  "changeID": "2857693",
  "type": "update",
  "table": "users",
  "where": {
    "id": "32"
  },
  "set": {
    "first_name": "Johnny",
    "last_name": "Applesead"
  }
}
When JavaScript gets this change, it updates the local storage and makes the corresponding DOM changes based on which table is being changed. Please keep in mind that my issue is not with updating the DOM, but with syncing the data from the database to JavaScript both quickly and seamlessly.
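A client-side handler for pushes in the format above might look like the following sketch (the cache shape and function names are illustrative, not from the original app); the changeID gap check is what decides when a full resync is needed:

```javascript
// Apply a pushed change to a local cache keyed by table and row id.
// If the changeID isn't exactly one past the last one seen, a push was
// missed and the caller should fall back to a full resync.
function applyChange(push, cache, lastChangeID) {
  var id = parseInt(push.changeID, 10);
  if (id !== lastChangeID + 1) {
    return { resync: true, lastChangeID: lastChangeID }; // gap detected
  }
  if (push.type === 'update') {
    var table = cache[push.table] || (cache[push.table] = {});
    var row = table[push.where.id] || (table[push.where.id] = {});
    for (var key in push.set) row[key] = push.set[key];  // apply the "set"
  }
  return { resync: false, lastChangeID: id };
}
```

The caller stores the returned lastChangeID and, on resync: true, refetches the full data set from the API before resuming incremental pushes.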
Going through this, I can't help but think that this is a terribly complicated solution to something that should be reasonably simple. Am I missing a Gotcha? How would you sync multiple JavaScript Clients with a MySQL Database seamlessly?
Just to update the question a few months later - I ended up sticking with this method and it works quite well.
I know this is an old question, but I've spent a lot of time working on this exact same problem although for a completely different context. I am creating a Phonegap App and it has to work offline and sync at a later point.
The big revelation for me is that what I really need is version control between the browser and the server, so that's what I made: it stores data in sets and keys within those sets, and versions all of those individually. When things go wrong, there is a conflict-resolution callback that you can use to resolve them.
I just put the project on GitHub; its URL is https://github.com/forbesmyester/SyncIt