Correct use of socket.io events' callbacks - javascript

I was reading this interesting introductory article about how socket.io's events and callbacks work.
I decided to give it a first try with something like the following.
First try
server.js
// client is the socket for the client
client.on('foo', function (callback) {
  callback("Hello world");
});
client.js
// server is the socket for the server
server.emit('foo', function (msg) {
  alert(msg);
});
Well, it just so happens that it actually didn't work (the server throws an exception saying that callback is not a function). Trying to solve that, I found this answer explaining how to do it the right way. Well, that didn't work either. A few modifications later, I got to this...
Second try
server.js
// client is the socket for the client
client.on('foo', function (name, callback) {
  callback("Hello world");
});
client.js
// server is the socket for the server
server.emit('foo', {}, function (msg) {
  alert(msg);
});
Well, it works perfectly, but having to add that "name" parameter and that empty hash which I never use seems like a not-so-good solution.
I tried to find an explanation for this in socket.io's amazingly incomplete documentation, but found nothing about this behaviour, which is why I'm asking here.
Another doubt I have is whether it's possible to do the same in the other direction (i.e., the server sending a callback to the client, and the callback then getting executed on the server), but I haven't tried that yet.
TL;DR: Why doesn't the first try work while the second one does? Is there a way to avoid that useless empty hash and name argument? Does this work the same in both directions (server→client and client→server)?

The empty object doesn't have to be an object, it can be virtually anything, such as a string, or maybe even null (haven't tried that). Also the name parameter isn't specifically a "name" parameter, it's simply whatever data you passed from the client (again, that could be the empty object you are currently using, or anything else). A better generic parameter name might be data. Sure, you could call it a waste, but it's only two characters that you're transferring, and most of the time you'll probably find a use for that data.
The third argument you're passing to emit (a function) is the optional callback parameter, and obviously since it's working, you're using it right.
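To make that concrete, here's a minimal sketch of the data-plus-acknowledgement pattern (the payload and its contents are made up):
// client.js: the middle argument is arbitrary data; the trailing
// function is the acknowledgement callback
server.emit('foo', { user: 'alice' }, function (msg) {
  alert(msg); // "Hello alice"
});

// server.js: the data arrives first, the ack callback last
client.on('foo', function (data, callback) {
  callback('Hello ' + data.user);
});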
As to going in the reverse direction, I've never tried that either. It's likely to work, but even if it doesn't, all you have to do is send a unique ID along with each of your push events, and then have the client emit an event back to the server with that ID, and write some code on the server which reassociates that event with the original push event. You could even use a counter on each socket as your push ID, and then use the combination of socket.id and event.id as a unique identifier.
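If it doesn't, a rough sketch of that fallback could look like this (pendingAcks and the event names are invented for illustration):
// server.js: correlate pushes and replies by a unique ID
var pendingAcks = {};
var nextPushId = 0;

function pushWithAck(client, data, onAck) {
  var id = client.id + ':' + (nextPushId++);
  pendingAcks[id] = onAck; // remember the callback
  client.emit('push', { id: id, data: data });
}

client.on('pushAck', function (msg) {
  var onAck = pendingAcks[msg.id]; // reassociate with the push
  if (onAck) {
    delete pendingAcks[msg.id];
    onAck(msg.result);
  }
});

// client.js: echo the ID back so the server can reassociate
server.on('push', function (msg) {
  server.emit('pushAck', { id: msg.id, result: 'got it' });
});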

Related

Refactoring websocket code that uses global variables into events / async programming

There's a bit of someone else's code I am trying to add functionality to. It's using WebSockets to communicate with a server which I will most likely not be able to change (the server runs on a $3 microcontroller...).
The pattern used, for instance when uploading data to the server, consists of setting global variables, sending a series of messages on the socket, and having an 'onmessage' handler deal with the response. This seems clumsy, given that it assumes only one socket call is ever made at a time (I think the server does in fact guarantee that). The server can reply with multiple messages, and even figuring out when the messages are finished is fiddly.
I am thinking of restructuring this so that I have a better handle on things, mostly w.r.t. being able to know when the response has arrived (and finished), moving toward patterns like
function save_file(name, data, callback) {
}
And perhaps at some point I can even turn them into async functions.
So, a couple of ideas:
- is there some kind of identifier in the WebSocket object that might allow me to better tie together request and response?
- short of that, what is the right pattern? I started using custom events, which lets me tie the whole process together much better, since I can supply a callback by attaching it to the event. But even removeEventListener is tricky, because I need to keep a reference to every single listener to make sure I can remove them later. A sketch of the direction I'm considering follows below.
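For illustration, this is roughly the shape I have in mind; all names here are invented, and it leans on the fact that only one call is in flight at a time:
// FIFO of callbacks for in-flight requests (one at a time)
var pending = [];

socket.onmessage = function (event) {
  var msg = parseResponse(event.data);         // however responses are framed
  if (isFinalMessage(msg) && pending.length) { // the fiddly "is it done?" check
    pending.shift()(msg);
  }
};

function save_file(name, data, callback) {
  pending.push(callback);
  socket.send(encodeSaveRequest(name, data));  // really a series of messages
}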
Any advice anyone?

How to prevent invoking 'Meteor.call' from JavaScript Console?

I just noticed that Meteor.call, the mechanism that is supposed to prevent users from invoking a collection's insert, update, and remove methods directly, can still be invoked from the JavaScript console.
Here's the client part:
// client
...
Meteor.call('insertProduct', productInfo);
...
Here's the server part:
// server
Meteor.methods({
  insertProduct: function (productInfo) {
    Product.insert(...);
  }
});
OK, I know people can't invoke Product.insert() directly from their JavaScript console.
But if they try a little harder, they'll find Meteor.call() in the client's JavaScript from the developer tools' resources tab.
So now they can invoke Meteor.call from their console and try to guess what productInfo's properties should be.
So I wonder: how can we prevent this final activity?
Does Meteor.call do the job well enough, or am I missing something important?
Meteor.call is a global function, just like window.alert(). Unfortunately, there is nothing you can do to prevent a user from calling Meteor.call. However, you can validate the schema of the data and the actual data a user is sending. I'd recommend https://github.com/aldeed/meteor-simple-schema (aldeed:simple-schema as the Meteor package name) to ensure you don't get garbage data in your project.
As others pointed out, Meteor.call can surely be used from the console. The subtle issue here is that a legitimate user of a Meteor app could in turn do bad things on the server. So even if one checks on the server that the user is legitimate, that by itself does not guarantee that the data is protected.
This is not an issue only with Meteor. I think all such apps potentially need to protect against corruption of their data, even by legitimate users.
One way to protect against such corruption is by using an IIFE (Immediately Invoked Function Expression):
Wrap your module in an IIFE. Inside the closure, keep a private variable which stores a unique one-time-use key (k1). That key needs to be placed there via another route -- maybe by ensuring that a collection observer gets fired in the client at startup. One can use other strategies here too. The idea is to squirrel the value of k1 in from the server and deposit it in a private variable.
Then, each time you invoke a Meteor.call from inside your code, pass k1 along as one of the parameters. The server in turn checks whether k1 is indeed legal for that browser connection.
Since k1 is stored in a private variable inside the closure created by the IIFE, it would be quite difficult for someone at the browser console to determine its value. Hence, even though Meteor.call can indeed be invoked from the browser console, it would not cause any harm. This approach should be quite a good deterrent against data corruption.
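A minimal sketch of that idea, assuming the key arrives through a client-side observer at startup (the Keys collection and its field names are invented):
var MyModule = (function () {
  var k1 = null; // private: not reachable from the console

  // "squirrel in" the key from the server, e.g. via an observer
  Keys.find().observe({
    added: function (doc) { k1 = doc.value; }
  });

  return {
    insertProduct: function (productInfo) {
      // the server checks that k1 is legal for this connection
      Meteor.call('insertProduct', productInfo, k1);
    }
  };
})();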
As mentioned by #Faysal, you have several ways to ensure your calls are legit. An easy first step is to implement alanning:roles and do role checks from within your method, like the following:
Meteor.methods({
  methodName: function () {
    if (!Roles.userIsInRole(this.userId, 'admin')) {
      throw new Meteor.Error(403, 'not authorized');
    }
    // your code here
  }
});
This way, only admin users can call the method.
Note that you can also check this.connection from within the method to determine whether the call comes from the server (in which case this.connection is null) or from the client.
Generally speaking, doing checks and data manipulations from your methods is a nice way to go. Allow/deny are nice to begin with but become really hard to maintain when your collections get heavier and your edge-cases expand.
You cannot block Meteor.call from the console, just like you can't block CollectionName.find().count() from the console. These are global functions in Meteor.
But there are simple steps you can take to secure your methods.
Use aldeed:simple-schema to define the types of data your collection can accept. This lets you set the specific keys your collection takes as well as their types (string, boolean, array, object, integer). https://github.com/aldeed/meteor-simple-schema
Ensure that only logged-in users can update from your method, or set global allow/deny rules. https://www.meteor.com/tutorials/blaze/security-with-methods && https://www.discovermeteor.com/blog/allow-deny-a-security-primer/
Remove the insecure and autopublish packages.
The simple combo of schema and allow/deny should do you just fine.
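As a minimal sketch of the first point (the Product fields here are invented), a SimpleSchema instance can be used directly with Meteor's check:
var productSchema = new SimpleSchema({
  name:  { type: String, max: 100 },
  price: { type: Number, min: 0 }
});

Meteor.methods({
  insertProduct: function (productInfo) {
    check(productInfo, productSchema); // throws if the data doesn't match
    if (!this.userId) {
      throw new Meteor.Error(403, 'not authorized');
    }
    Product.insert(productInfo);
  }
});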
As you know by now, you can't really block calling Meteor.call from the JavaScript console. What I'd like to add to the suggestions from #Stephen and #thatgibbyguy: be sure to check your user's role when adding documents to the collection. Simple-schema will help you prevent inserting/updating garbage data in the collection, and the alanning:roles package certainly makes your app more secure by controlling who has permission to write/read/update your collection documents.
Alanning:roles Package

Prevent return until condition is met

I know these types of questions come up fairly often, but I need help with a wait-like mechanism in JavaScript. I know setTimeout-based solutions are going to come up, but I'm not sure how to pull one off in my case.
I'm writing an API that uses a WebSocket internally. There's a connect() method that sets up the WebSocket, and I need to make it not return until after the WebSocket is set up. I'd like it to return a value for whether or not the connection was successful, but that's not the main problem.
The issue I'm hitting is that after a user calls connect(), they may call another method that relies on the WebSocket to be properly set up. If it's called too early, an error is thrown stating that the object is not usable.
My current solution is to set a "connected" flag once I've determined that the connection succeeded, and to check for it in each method. If it's not connected, I add the method call to a queue that is run through by the same code that sets the flag. This works, but it spreads that style of code all over my methods and also seems misleading from the user's perspective, since the calls to those functions are deferred. Also, if other user code relies on those calls being completed before it runs, it won't behave as expected.
I've been racking my brain over how to handle this case. The easiest solution would be to find a way to block returning from connect() until after the WebSocket is set up, but that's not really the JavaScript way. The other option would be to make the user provide the rest of their code in a callback, but that seems like a weird thing to do here. Maybe I'm over-thinking it?
Edit: To better illustrate my problem, here's an example of what the user could do:
var client = new Client(options);
client.connect();
client.getServerStatus();
The getServerStatus() method would be using the WebSocket internally. If the WebSocket is not set up yet, the user will get that not usable error.
Today's JavaScript does not really work like that, unfortunately. In the future (ECMAScript 6) there may be new language features that address this issue more directly. For now, however, you are stuck with the currently accepted method of handling asynchronous events, which is limited to callbacks. You may also want to explore promises to handle "callback hell", though you will need a library for that.
And yes, it does seem strange to have callbacks everywhere, especially for someone new to web programming; however, it is really the only way to go about it at this stage (assuming you want a cross-browser friendly solution).
"Wait" is almost the keyword you are looking for. Actually, it's yield that does this. See e.g. MDN's documentation.
There's a connect() method that sets up the WebSocket, and I need to make it not return until after the WebSocket is set up
That isn't going to happen unless you rewrite the JavaScript execution engine.
Either the code trying to send data needs to check the socket state (I'd encapsulate the socket in an object, supply a method which sets a member variable on the open/close events, and poll the state of that member variable from the external code), or you could add messages and callbacks to a queue and process the queue when the socket connects.
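A minimal sketch of the queueing variant, using the Client shape from the question (the URL handling and the message format are invented):
function Client(options) {
  this._url = options.url;
  this._queue = [];
  this._connected = false;
}

Client.prototype.connect = function () {
  var self = this;
  this._ws = new WebSocket(this._url);
  this._ws.onopen = function () {
    self._connected = true;
    // flush calls that arrived before the socket was ready
    while (self._queue.length) self._queue.shift()();
  };
};

Client.prototype.getServerStatus = function () {
  var self = this;
  if (!this._connected) {
    this._queue.push(function () { self.getServerStatus(); });
    return;
  }
  this._ws.send('status'); // whatever the protocol expects
};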

Where are the `req` and `res` coming from?

NOTE: This question has very little to do with jQuery, Drupal, or node.js; it's more of a generic question about "how frameworks achieve X", where X is something any of the frameworks I mentioned also provides.
I saw an example node.js code that looks like this:
var http = require('http');
var server = http.createServer();
server.listen(8000);
server.on('request', function (req, res) {
  // do something with req and res here
});
There is no obvious place where req and res are coming from. In fact, what does 'request' mean? Where is it supplied from?
I have noticed similar things in jQuery's .get() and .post() functions, and looking at the source did not help as much as I would have liked. I've even seen this done in Drupal: a function is defined in the theme layer or as a module hook with specific naming conventions by me, yet arguments appear out of nowhere, and there is a predictable structure of data (specified in the manual) inside those magic variables.
So what is this technique called, and how does it work? I've heard tell of Dependency Injection... is this it? If it is, could you explain in n00b terms how it is accomplished?
This is particularly confusing because I coded procedurally from the start, where you always know where a variable is coming from and how a function is being called...
The framework constructs the objects for you, and passes them to your callback.
N.B. req and res are just parameter names; you could call them spam and eggs, or hocus and pocus, for all it matters.
In fact, what does request mean? Where is it supplied from?
Whenever you want to access a web site, you're using a special protocol, the hypertext transfer protocol (HTTP). This protocol mainly uses two things:
a question from the client like "what is / on your server?" (the request)
an answer from the server like "it's a text/html, the length is 2000 bytes, and here is the document" (the response).
This request-response model is used directly in node.js, as the server you're using is an HTTP server.
[...] could you explain in n00b terms how it is accomplished?
Do you know what a main-loop or event-loop is? Almost every GUI application has one. It's basically a loop like this:
while (waitForNewEvent(&event)) {
  handleMsg(&event);
}
This event can be anything, from keyboard input to another software trying to bring your window to front. It can also be something like "are you ready for standby?".
node.js uses such an event loop in its server implementation. server.on('request', callback) basically tells node.js that you want callback to be invoked whenever a request comes in:
while (waitForNewEvent(&event)) {
  if (event == "request") {
    callback(request, &response);
    responseToClient(response);
  }
}
Intern example
Or even simpler: think of an intern who's just running around in circles in a building. He's the event loop. Now, someone in your server room tells him that every request should be brought to them. He writes this down and continues on his never-ending tour.
Then someone stands in front of the building and wants to check his bank account. He simply throws a request into a post box, and the intern rushes to the server room and tells the technicians that a specific site has been requested, giving them the necessary information. However, he needs to wait for their response, since their response isn't on his list.
The technicians check the request and find out that the user isn't qualified for it(*). They prepare an error message and give it to the intern. He now returns to the front of the building, hands the error message to the first client, and is ready for other messages.
(*): At this point they might need to check something in a database, which could take some time. They could tell the intern to come back later and call him when they're ready. In that case the intern could continue his tour until the technicians are ready.
You're passing the function to the .on() function. When the event occurs, some internal code invokes the function you passed, and provides the arguments to it.
Here's an example. The server object has a method named on. It takes a name string and a callback function.
It uses setTimeout to wait one second before invoking the callback it was given. When it invokes it, it passes to it the name that was provided, as well as a static message "hi there".
// Think of this as the internal Node code...
var server = {
  // v---this will be the function you pass
  on: function (name, callback) {
    setTimeout(function () {
      callback(name, "hi there"); // here your function is invoked
    }, 1000);
  }
};
So here we call .on(), and pass it the name "foo", and the callback function. When the callback is invoked, it will be given the name, and the "hi there" message.
// ...and this is your code.
server.on("foo", function (name, message) {
  console.log(name, message);
});
They are short for "Request" and "Response." It is typical of many web frameworks to pass these two objects into a request handling method (action or whatever you want to call it).

Google Closure: Centralized AJAX 'decoder'?

First of all, I must say that I'm very new to Google Closure, but I'm learning :)
Okay, so I'm making a web app that's going to be pretty big, and I thought it would be good to manage all the AJAX requests in one XhrManager. No problem there.
But is it possible to have some kind of default callback that would check for errors first, display them if necessary, and then, when the check passes, launch the "real" callback? I'm talking about a feature like the decoders in amplify.js. Here's their explanation:
Decoders allow you to parse an ajax response before calling the success or error callback. This allows you to return data marked with a status and react accordingly. This also allows you to manipulate the data any way you want before passing the data along to the callback.
I know it sounds complicated (and it really is), and the fact that I'm not that good at explaining doesn't help either, but yeah.
The solution I have in my head right now is to create an object that stores all the "real" callbacks, with the "error-checking callback" executing the correct one after it finishes checking. But that feels a bit hack-ish, and I think there has to be a better way to do this.
Since you always have to decode/verify your AJAX data (you never trust data returned from a server, now do you?), you're always going to have different decoders/verifiers for different types of AJAX payloads. Thus you probably should be passing the decoder/verifier routine as the AJAX callback itself -- for verifications common to all data types, call a common function inside the callback.
An added benefit of this is the ability to "translate" unmangled JSON objects into "mangled" JSON objects, so that you don't have to use quoted property access in your code.
For example, assume that your AJAX payload consists of the following JSON object:
{ "hello":"world" }
If you want to refer to the hello property in your code and still pass the Compiler's Advanced Mode, you'll need to write obj["hello"]. However, if you pass in your decoder as the callback, and on its first line you do:
var decoded = { hello:response["hello"] };
then do your error checking etc. before returning decoded as the AJAX response. In your code, you can simply do obj.hello and everything will be nicely optimized and mangled by Advanced Mode.
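Putting those pieces together, here's a sketch of such a decoder-as-callback for use with goog.net.XhrManager (showError and the response shape are invented):
// Wraps a "real" callback with the common decode/verify logic.
function makeDecoder(realCallback) {
  return function (e) {
    var xhr = e.target; // the goog.net.XhrIo that completed
    if (!xhr.isSuccess()) {
      showError(xhr.getLastError()); // invented error display
      return;
    }
    var response = xhr.getResponseJson();
    // Re-key quoted properties so Advanced Mode can mangle them.
    var decoded = { hello: response["hello"] };
    realCallback(decoded);
  };
}
You'd then pass makeDecoder(realCallback) wherever you would normally pass the request callback.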
