I am having an issue with a lightweight gRPC server I'm writing in NodeJS. I'm referencing the documentation here. I have been able to compile my proto files representing messages and services, and have successfully stood up a gRPC server with a server-side stream method I am able to trigger via BloomRPC.
I have a proto message called Parcel, which has one field: parcel_id. I want this method to stream a parcel of data every second. My first rudimentary pass at this was a loop that executes every second for a minute and writes a new parcel via call.write(parcel). I've included the method below; it executes with no errors when I invoke it via gRPC.
/**
* Implements the updateParcel RPC method.
* Feeds new parcel to the passed in "call" param
* until the simulation is stopped.
*/
function updateParcels(call) {
  console.log("Parcels requested...");
  // Continuously stream parcel protos to requester
  let i = 0;
  let id = 0;
  while (i < 60) {
    // Create dummy parcel
    let parcel = new messages.Parcel();
    parcel.setParcelId(id);
    id++; // Increment id
    // Write parcel to call object
    console.log("Sending parcel...");
    call.write(parcel);
    // Sleep for a second (1000 millis) before repeating
    sleep(1000);
    i++;
  }
  call.end();
}
My issue is that, although I am able to call my methods and receive results, the behavior is that I receive the first result immediately on the client (for both NodeJS client code and BloomRPC calls), but receive the last 59 results all at once only after the server executes call.end(). There are no errors, and the parcel objects I receive on the client are accurate and formatted correctly, they are just batched as described.
How can I achieve a constant stream of my parcels in real time? Is this possible? I've looked but can't tell for sure - do gRPC server-side streams have a batching behavior by default? I've tried my best to understand the gRPC documentation, but I can't tell if I'm simply trying to force gRPC server-side streams to do something they weren't designed to do. Thanks for any help, and let me know if I can provide any more information, as this is my first gRPC related SO question and I may have missed some relevant information.
It could be unrelated to gRPC and due instead to the sleep implementation used there.
The usual sleep helper in Node returns a promise, so for this to work you probably have to declare the function as async and call await sleep(1000);.
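As a sketch of that fix (the question doesn't show its sleep implementation, and the messages stub below is hypothetical; in real code it comes from the generated *_pb.js file), the streaming handler could look like this:

```javascript
// Stub standing in for the compiled proto module (hypothetical; in the
// real code `messages` comes from the generated *_pb.js file)
const messages = {
  Parcel: class {
    setParcelId(id) { this.parcelId = id; }
  },
};

// Promise-based sleep helper; the question's own `sleep` isn't shown,
// but this is the usual pattern
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Declaring the handler async lets `await sleep(...)` yield back to the
// event loop between writes, so each call.write() is flushed immediately
// (count and delayMs are parameterized here only for illustration)
async function updateParcels(call, count = 60, delayMs = 1000) {
  for (let id = 0; id < count; id++) {
    const parcel = new messages.Parcel();
    parcel.setParcelId(id);
    call.write(parcel);
    await sleep(delayMs); // non-blocking pause between writes
  }
  call.end();
}
```

A synchronous busy-wait sleep blocks Node's event loop, so gRPC never gets a chance to flush the buffered writes until the handler returns; awaiting a promise-based sleep yields between iterations, which is why the writes then go out one per second.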
I'm writing a Discord bot and having it schedule a cron job when it starts up using node-cron. I've defined it as follows (very basic so far):
cron.schedule('15 1,3,5,7,9,11,13,15,17,19,21,23 * * *', () => {
  GameUpdater.sendGameUpdatesToLeagueChannels(this.guilds);
});
The problem is, Discord bots will often reconnect, which reruns the portion of the code where this has to live. I'm considering checking an external value, perhaps in my database, that tracks whether it has run recently or is already running, but I was wondering if there is a proper way to make a cron job unique using node-cron?
First, you need somewhere to save the name of the scheduled task, which defaults to a UUID v4 but can be provided as an option in the third argument of the schedule method you are calling above.
With a reference to this name, either from a database or a constant hard-coded in your script, you can use cron.getTasks to list all of the scheduled tasks and ensure the task doesn't already exist before scheduling it.
If you are referencing a hard-coded name, then calling the schedule method will replace the existing scheduled task rather than creating a new one.
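The guard can be sketched like this. To keep it runnable outside node-cron, a plain Map stands in for the library's task registry (cron.getTasks()), and scheduleOnce is a hypothetical helper name:

```javascript
// Sketch of the "check before scheduling" guard. A plain Map stands in
// for node-cron's task registry (cron.getTasks()) so the logic is
// runnable here; scheduleOnce is a hypothetical helper name.
const scheduledTasks = new Map();

function scheduleOnce(name, expression, fn) {
  if (scheduledTasks.has(name)) {
    return scheduledTasks.get(name); // already registered: reuse it
  }
  const task = { name, expression, fn }; // node-cron would return a ScheduledTask
  scheduledTasks.set(name, task);
  return task;
}
```

With the real library you would consult cron.getTasks() instead of the Map, and pass the name in the options object of cron.schedule as shown in the update below.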
UPDATE
To set the name of the scheduled task, simply pass an options object as the third argument to the schedule method you are calling, with a name for your task.
Example:
cron.schedule('15 1,3,5,7,9,11,13,15,17,19,21,23 * * *', () => {
  GameUpdater.sendGameUpdatesToLeagueChannels(this.guilds);
}, { name: "example-task-1234" });
If the cron schedule is fired on startup from Client.on('ready'), have your cron job be scheduled in a separate Client.once('ready') handler instead. This way it won't fire the routine again when the bot reconnects, only if the entire shard/cluster process is terminated and restarted.
You can use a logger to keep track of the start and restart times of your server.
You can use winston for this.
It will generate a server log that you can read before starting your server, so that you can take action accordingly, e.g. adjust the timing.
I was profiling a "download leak" in my Firebase database (I'm using the JavaScript SDK/Firebase Functions on Node.js) and finally narrowed it down to the "update" function, which surprisingly causes data download (which impacts billing in my case quite significantly: ~50% of the bill comes from this leak):
Firebase functions index.js:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr").update({field1: "val1", field2: "val2"});
});
This function generates downloads at "/user/gCapeUausrUSDRqZH8tPzcrqnF42/wr" node
If I change the paths to something like this:
exports.myTrigger = functions.database.ref("some/data/path").onWrite((data, context) => {
  var dbRootRef = data.after.ref.root;
  return dbRootRef.child("/user/gCapeUausrUSDRqZH8tPzcrqnF42").update({"wr/field1": "val1", "wr/field2": "val2"});
});
It generates download at "/user/gCapeUausrUSDRqZH8tPzcrqnF42" node.
Here are the results of firebase database:profile:
How can I get rid of the download while updating data or reduce the usage since I only need to upload it?
I don't think this is possible in a Firebase Cloud Function trigger.
The .onWrite((data, context)) handler has a data field, which is the complete DataSnapshot.
And there is no way to configure it not to fetch its val.
Still, there are two things that you might do to help reduce the data cost:
Watch a narrower path for the trigger, e.g. functions.database.ref("some/data/path") vs ("some").
Use a more specific hook, i.e. onCreate() or onUpdate() instead of onWrite().
You should expect that all operations will round trip with your client code. Otherwise, how would the client know when the work is complete? It's going to take some space to express that. The screenshot you're showing (which is very tiny and hard to read - consider copying the text directly into your question) indicates a very small amount of download data.
To get a better sense of what the real cost is, run multiple tests and see if that tiny cost is itself actually just part of the one-time handshake between the client and server when the connection is established. That cost might not be an issue as your function code maintains a persistent connection over time as the Cloud Functions instance is reused.
I have read several posts about doing "sleep" or "wait" in Javascript. However, they all use client-side Javascript. I need to do this in a scheduled NetSuite SuiteScript. I have tried setTimeout() and it tells me it cannot find the function (as well as window.setTimeout()).
If I have to do an infinite loop with an if condition that gives me the delay I want, I will do that, but it is less than ideal. I want to know if there is a simple "sleep" or "wait" kind of way to delay code from executing.
My purpose is that my code deletes records. In my current setup, if two of these records are deleted too close to one another, NS throws "unexpected error" and stops. If there is a long enough pause in between, then it works. I am trying to automate this so I don't sit here deleting records all day.
The posts I have checked so far:
How to create javascript delay function
JavaScript.setTimeout
JavaScript sleep/wait before continuing
What is the JavaScript version of sleep()?
Mine is not a duplicate to any of those as they all assume Client side and are not specific to NetSuite SuiteScript. Thanks!
Doing a loop-based wait might add overhead, as NetSuite may at times warn about the number of script statements.
Another way of doing a sleep is to use nlapiRequestURL() against a service on your own web server, as it is blocking and synchronous on the server side. You can write an HTTP service whose server-side handler does the sleeping and then responds to the client.
If you are deleting records in a scheduled script then those run serially. Have you tried wrapping the nlapiDeleteRecord call in a try-catch?
If you are getting an error, then is a User Event or workflow script running and throwing the error?
As far as a wait goes, I've done the following. It runs the risk of throwing a too-many-instructions error but avoids a database call that would eat governance. If you can find a nice API call with zero governance cost that eats some time, that would be better, but this worked well enough for me.
function pause(waitTime) { // seconds
  try {
    var endTime = new Date().getTime() + waitTime * 1000;
    var now = null;
    do {
      // throw in an API call to eat time
      now = new Date().getTime();
    } while (now < endTime);
  } catch (e) {
    nlapiLogExecution("ERROR", "not enough sleep");
  }
}
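Putting the two suggestions together (try-catch around each delete, plus a pause between deletes), the loop might look like this sketch. deleteFn and pauseFn are injected stand-ins so the control flow is runnable outside NetSuite; in a real scheduled script they would be nlapiDeleteRecord and the pause() above, and deleteWithPause is a hypothetical helper name:

```javascript
// Sketch: delete records serially, pausing between deletes, and wrap
// each delete in a try-catch so one failure doesn't stop the batch.
// deleteFn stands in for nlapiDeleteRecord; pauseFn for pause() above.
function deleteWithPause(recordType, ids, deleteFn, pauseFn) {
  var failedIds = [];
  for (var i = 0; i < ids.length; i++) {
    try {
      deleteFn(recordType, ids[i]);
    } catch (e) {
      failedIds.push(ids[i]); // log or retry these later
    }
    pauseFn(1); // wait ~1 second before the next delete
  }
  return failedIds;
}
```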
So I'm working on this tutorial to learn how to make a RESTful API in Node.js, and one part of it suddenly got me a bit worried. In the app an object is instantiated to handle the RESTful requests called TaskRepository().
As per Gist related to the tutorial, you'll see this code snippet:
var taskRepository = new TaskRepository();
My question is, will this instantiate one TaskRepository() object per user? In that case, isn't there a chance you'll run rather quickly out of memory if there's high enough traffic?
What's best practice here?
Also, if that is the case, how would you get around it programmatically to avoid a future traffic jam?
In that specific API, there is an API to create a task and it returns a task ID. That task will exist until some future API call refers to that specific ID and uses the delete operation.
The TaskRepository is per server (created once for your server), not per-user.
These tasks are not particularly per-user, but when you create a task and return it to the requestor, it is only that requestor that will likely know the ID and use it. Though since this example does not create random IDs, they are predictable, so anyone could reference a specific ID.
If you do not delete tasks after they are created, they will accumulate over time. Usually, something like this would create some sort of inactivity timeout and would automatically delete tasks if they are not used in some period of time (say 30 minutes).
In the case of the Gist you linked, the answer is no. new TaskRepository() is called once when the server is set up (right next to creating the app with var app = express()), and then that one instance is shared and used for all requests.
Now, if you had called new TaskRepository() inside a route handler (app.get('stuff/etc', function () {})), then you'd be correct: it would create a new instance of TaskRepository per HTTP request.
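The difference can be sketched without Express at all (the TaskRepository here is a hypothetical minimal version, and the handler functions stand in for route handlers registered with app.get):

```javascript
// Hypothetical minimal TaskRepository standing in for the tutorial's class
class TaskRepository {
  constructor() { this.tasks = new Map(); }
}

// Module scope: this line runs ONCE at server startup, like the line
// next to var app = express() in the Gist
const sharedRepository = new TaskRepository();

// Stand-in for a route handler registered with app.get(...)
function handleRequest() {
  return sharedRepository; // every request sees the same instance
}

// The per-request variant the question worries about
function handleRequestPerRequest() {
  return new TaskRepository(); // a fresh object on every call
}
```

Memory use therefore grows with the number of stored tasks, not with the number of users, which is why the shared-instance pattern doesn't run out of memory under traffic by itself.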
NOTE: This question has very little to do with jQuery, Drupal, or node.js specifically; it's more of a generic question on how frameworks achieve X, where X is something any of the frameworks I mentioned also provides.
I saw an example node.js code that looks like this:
var http = require('http');
var server = http.createServer();
server.listen(8000);
server.on('request', function(req, res) {
  // do something with req and res here
});
There is no obvious place where req and res are coming from. In fact, what does 'request' mean? Where is it supplied from?
I have noticed similar things in jQuery .get() and .post() functions, and looking at the source did not help as much as I would like. I've even seen this being done in Drupal; a function is defined in the theme layer or as a module_hook with specific naming conventions by me, but arguments appear outta nowhere and there is a predictable structure of data (specified in the manual) inside those magic variables.
So what is this technique called, and how does it work. I've heard tell of Dependency Injection... is this it? If it is, could you explain in n00b terms how it is accomplished?
This is particularly confusing because I coded in procedural from the start, and we always know where a variable is coming from and how a function is being called...
The framework constructs the objects for you, and passes them to your callback.
N.B. req and res are just parameter names; you could call them spam and eggs, or hocus and pocus, for all it matters.
In fact, what does request mean? Where is it supplied from?
Whenever you want to access a web site, you're using a special protocol, the hypertext transfer protocol (HTTP). This protocol mainly uses two things:
a question from the client like "what is / on your server?" (the request)
an answer from the server like "it's a text/html, the length is 2000 bytes, and here is the document" (the response).
This request-response model is used directly in node.js, as the server you're using is an HTTP server.
[...] could you explain in n00b terms how it is accomplished?
Do you know what a main-loop or event-loop is? Almost every GUI application has one. It's basically a loop like this:
while (waitForNewEvent(&event)) {
  handleMsg(&event);
}
This event can be anything, from keyboard input to another software trying to bring your window to front. It can also be something like "are you ready for standby?".
node.js uses such an event loop in its server implementation. server.on('request', callback) basically tells node.js that you want callback to be used whenever a request comes in:
while (waitForNewEvent(&event)) {
  if (event == "request") {
    callback(request, &response);
    responseToClient(response);
  }
}
Intern example
Or even simpler: think of an intern who's just running around in circles in a building. He's the event loop. Now someone in your server room tells him that every request should be brought to them. He writes this down and continues on his never-ending tour.
Then someone stands in front of the building and wants to check his bank-account. He simply throws a request into a post box and the intern rushes to the server room and tells the technicians that the specific site has been requested and gives them the necessary information. However, he needs to wait on their response, since their response isn't on his list.
The technicians check the request and find out that the user isn't qualified for the given request(*). They prepare an error message and give it to the intern. He now returns to the front of the building, gives the error message to the first client and is ready for other messages.
(*): At this point they might need to check something in a database, which might take some time. They could tell the intern to come back later and call him if they're ready. In this case the intern could continue his way until the technicians are ready.
You're passing the function to the .on() function. When the event occurs, some internal code invokes the function you passed, and provides the arguments to it.
Here's an example. The server object has a method named on. It takes a name string and a callback function.
It uses setTimeout to wait one second before invoking the callback it was given. When it invokes it, it passes to it the name that was provided, as well as a static message "hi there".
// Think of this as the internal Node code...
var server = { // v---this will be the function you pass
  on: function(name, callback) {
    setTimeout(function() {
      callback(name, "hi there"); // here your function is invoked
    }, 1000);
  }
};
So here we call .on(), and pass it the name "foo", and the callback function. When the callback is invoked, it will be given the name, and the "hi there" message.
// ...and this is your code.
server.on("foo", function(name, message) {
  console.log(name, message);
});
They are short for "Request" and "Response." It is typical of many web frameworks to pass these two objects into a request handling method (action or whatever you want to call it).