Backbone.js has a neat feature where you can sync changes back to your server using standard HTTP verbs.
For example, you may have a model object and some code which executes a GET:
var coolModel = Backbone.Model.extend({url:'mysite/mymodel'});
var myCoolModel = new coolModel();
myCoolModel.fetch({error:processError});
In the case where the server returns a 4XX or 5XX, the error function 'processError' is run, which is great: you can process the error in whichever way suits.
As Backbone.js uses jQuery to perform the GET, jQuery reports the error, which it is. The 4XX is a valid error which should be recovered from; my client-side app is not broken, it just needs to behave slightly differently.
My question is: is it considered bad practice to have this error raised by jQuery displayed in the browser's console window or status bar? Should I be suppressing this error somehow so that users in production don't see an error reported by the browser when the error is recoverable? Or is it correct in the land of HTTP to leave it as is?
Handling errors in Backbone is a really interesting topic and one I hope to write about at some point. It's very nice to visually indicate errors to your users in a non-obtrusive manner. Some things to consider are:
Your users are not looking at the status bar or developer tools
Your users are expecting specific behavior from your application
When your application does not behave correctly visual problem indicators are important
I'd recommend considering how the failure impacts the user's intention. For instance, if they are fetching data for the first page and that data is not returned correctly, you will need to handle the error by displaying a data-retrieval failure message (or, even better, falling back on previously loaded data from a cache... if it exists). If the intention is to save an item and the error code returned is 400, that is definitely not a success, and the user should be prompted to retry the save (or perhaps a re-save should be attempted on an interval).
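To make the intent concrete, here is a minimal sketch of what such a status-aware error callback might look like. The action names, cached field, and messages are hypothetical; the xhr argument is the jQuery XHR object that Backbone hands to the error callback:

```javascript
// Hypothetical sketch of a status-aware error callback for a Backbone fetch.
// Returns a description of what the UI should do next.
function processError(model, xhr) {
  if (xhr.status >= 500) {
    // Server-side failure: nothing the user can do but retry later.
    return { action: 'notify', message: 'Server error, please try again later.' };
  }
  if (xhr.status === 404 && model.cached) {
    // Fall back on previously loaded data if it exists.
    return { action: 'useCache', data: model.cached };
  }
  // Other 4XX: the request was understood but refused; prompt a retry.
  return { action: 'retry', message: 'Could not load data, retrying...' };
}
```

The point is that each status range maps to a distinct, user-visible recovery path rather than a generic failure.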
You can silently ignore errors and not indicate them, but your users will get confused and it will lead to unexpected problems. I can't preach perfect error handling, because I'm still getting better at it myself.
I would say HTTP status codes are there for a reason and are entirely valid if the reason for them is valid, so yes, just use them. However: 400 means Bad Request, which means the input is syntactically wrong. You should send a more appropriate status code (like 409 for a conflict, 428 for a failed precondition, etc.). I'm struggling to come up with a project with a valid use for 418 I'm a teapot, but I will succeed some day.
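As an illustration of picking the right code, a hypothetical server-side helper might map failure kinds to statuses along these lines (the failure names are made up for the sketch):

```javascript
// Hypothetical mapping from kinds of failure to HTTP status codes,
// following the distinction above between "bad syntax" and other refusals.
function statusFor(failure) {
  switch (failure) {
    case 'malformed-input':     return 400; // Bad Request: syntactically wrong
    case 'conflict':            return 409; // Conflict: e.g. a concurrent edit
    case 'precondition-failed': return 428; // Precondition Required
    default:                    return 500; // anything genuinely unexpected
  }
}
```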
Anybody interested in the inner workings of your site could look at the console, but there should be no problem with this, nor should you overly pander to a clean look there, just make sure your own process flow is sound.
So I have an AngularJS application that is acting as a single-page application (SPA). This SPA uses an existing Rails API to make XHR requests, which return entries from the database as JSON.
I am currently trying to write some code to handle possible server responses. The first one that came to mind was a request to delete an entry with a one-to-many relationship. For example:
class Library < ActiveRecord::Base
  has_many :books
end
Would be my Rails model. In my case, I don't allow the user to destroy libraries if they currently have books. The controller will respond with some sort of appropriate status, possibly a flag in the response header. The view then responds with a message to the user that the library still has books and cannot be deleted until the books are removed.
So my question is about exception handling. If I am to follow the oft paraphrased:
Exceptions are for exceptional cases.
I am led to the conclusion that this should not be an exceptional case, because it is expected that the user will occasionally try to delete a library with books. Furthermore, the program accounts for this and has a message prepared for this case. Am I right not to consider this an exception?
For those of you that are deep into AngularJS, when do you actually use exceptions in practice?
Also, I think it's important to note that because of the asynchronous nature of the XHR requests, I am using promise chaining to handle the exceptions with .then, .catch, $q.reject, etc., which I'm still new to and whose relationship to exceptions I don't fully understand.
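For what it's worth, the recoverable case can be handled as an ordinary branch in the promise chain rather than as an exception. A sketch, assuming the Rails controller answers the "library still has books" case with 409 Conflict (the message text and field names are illustrative):

```javascript
// Hypothetical handler for a rejected delete request in the promise chain.
// The expected, recoverable case (409) becomes an ordinary return value;
// anything else is rethrown for a later .catch to deal with.
function handleDeleteFailure(response) {
  if (response.status === 409) {
    // Anticipated case: not exceptional, just a different outcome.
    return { deleted: false, message: 'Remove the books before deleting this library.' };
  }
  // Genuinely unexpected: propagate as an error.
  throw new Error('Unexpected status ' + response.status);
}
```

Wired into Angular, this would sit in the rejection path, e.g. $http.delete(url).catch(handleDeleteFailure), so the view can show the prepared message without any exception machinery.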
I also would not handle this as an exception. My method would be to log it with warning level on server side and show the error message in the view on client side.
I would not consider myself an expert, but I think exceptions should never appear as feedback on the client side in a production application, because the user should not be forced to look into the console to search for errors.
If the error is an anticipated one like yours, just show an error message.
If it is rooted in your methods and only appears in cases which should not happen during normal use, you should throw an exception so you know what happened. But you should also show some feedback in the view for the user.
I'm new to javascript, Node.js, and 0MQ, so n00b * 3 here.
I want to set up a simple request and reply, but I want the client to wait for a response before sending out the next request.
The zguide goes over this, but the Node.js version does not behave like the C version (which is what I want).
I realize I'm butting up against a paradigm shift here in how I look at the problem, but I still feel like I should be able to do this. Can I make a recv call (or something similar) in the client?
You're right: the Node.js version of ZMQ doesn't behave the way you would expect it to if you're coming from the C version, but it actually is behaving according to the rules; it's just adding a bit of its own sauce to the mix.
Specifically, the C bindings will throw an error if you try to break the strict REQ/REP/REQ/REP cycle. In node.js, it will cache the out-of-order message until the previous response comes back, and then send out that new message... so you're still getting REQ/REP/REQ/REP, in order, and you can choose to send a message whenever you want without an error.
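The caching behaviour described can be modelled with a small queue. This toy sketch (where transport stands in for the real socket's send function) shows how out-of-order sends are held back until the previous reply arrives, preserving strict REQ/REP order:

```javascript
// Toy model of the node.js binding's behaviour: requests made while a
// reply is outstanding are queued, not errors, and each queued request
// goes out only after the previous reply comes back.
function ReqQueue(transport) {
  this.transport = transport; // stand-in for the real socket's send
  this.pending = [];
  this.waiting = false;
}
ReqQueue.prototype.send = function (msg) {
  if (this.waiting) {
    this.pending.push(msg); // out-of-order send: cached, as node zmq does
  } else {
    this.waiting = true;
    this.transport(msg);
  }
};
ReqQueue.prototype.onReply = function () {
  this.waiting = false;
  if (this.pending.length) this.send(this.pending.shift());
};
```

With the C bindings the second send would throw; here it is simply deferred, which is exactly the difference the question is running into.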
This is probably a poor design choice on the part of the node ZMQ binding authors, first of all because it's confusing to new users such as yourself, and second of all if you're using REQ/REP you'd probably prefer a hard failure if you go out of order, otherwise you'd be using a different socket type.
If you look at the beginning of the Node.js documentation for domains it states:
By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where you left off", without leaking references, or creating some other sort of undefined brittle state.
Again in the code example it gives in that first section it says:
Though we've prevented abrupt process restarting, we are leaking resources like crazy
I would like to understand why this is the case? What resources are leaking? They recommend that you only use domains to catch errors and safely shutdown a process. Is this a problem with all exceptions, not just when working with domains? Is it a bad practice to throw and catch exceptions in Javascript? I know it's a common pattern in Python.
EDIT
I can understand why there could be resource leaks in a non garbage collected language if you throw an exception because then any code you might run to clean up objects wouldn't run if an exception is thrown.
The only reason I can imagine with Javascript is if throwing an exception stores references to variables in the scope where the exception was thrown (and maybe things in the call stack), thus keeping references around, and then the exception object is kept around and never gets cleaned up. Unless the leaking resources referred to are resources internal to the engine.
UPDATE
I've written a blog post explaining the answer to this a bit better now. Check it out.
Unexpected exceptions are the ones you need to worry about. If you don't know enough about the state of the app to add handling for a particular exception and manage any necessary state cleanup, then by definition the state of your app is undefined and unknowable, and it's quite possible that there are things hanging around that shouldn't be. It's not just memory leaks you have to worry about. Unknown application state can cause unpredictable and unwanted application behavior (like delivering output that's just wrong: a partially rendered template, or an incomplete calculation result, or worse, a condition where every subsequent output is wrong). That's why it's important to exit the process when an unhandled exception occurs. It gives your app the chance to restart in a known state.
Exceptions happen, and that's fine. Embrace it. Shut down the process and use something like Forever to detect it and set things back on track. Clusters and domains are great, too. The text you were reading is not a caution against throwing exceptions, or continuing the process when you've handled an exception that you were expecting -- it's a caution against keeping the process running when unexpected exceptions occur.
I think when they said "we are leaking resources", they really meant "we might be leaking resources". If http.createServer handles exceptions appropriately, threads and sockets shouldn't be leaked. However, they certainly could be if it doesn't handle things properly. In the general case, you never really know if something handles errors properly all the time.
I think they are wrong / very misleading when they said "By the .. nature of how throw works in JavaScript, there is almost never any way to safely ..." . There should not be anything about how throw works in Javascript (vs other languages) that makes it unsafe. There is also nothing about how throw/catch works in general that makes it unsafe - unless of course you use them wrong.
What they should have said is that exceptional cases (regardless of whether or not exceptions are used) need to be handled appropriately. There are a few different categories to recognize:
A. State
Exceptions that occur while external state (database writing, file output, etc) is in a transient state
Exceptions that occur while shared memory is in a transient state
Exceptions where only local variables might be in a transient state
B. Reversibility
Reversible / revertible state (eg database rollbacks)
Irreversible state (Lost data, unknown how to reverse, or prohibitive to reverse)
C. Data criticality
Data can be scrapped
Data must be used (even if corrupted)
Regardless of the type of state you're messing with, if you can reverse it, you should do that and you're set. The problem is irreversible state. If you can destroy the corrupted data (or quarantine it for separate inspection), that is the best move for irreversible state. This is done automatically for local variables when an exception is thrown, which is why exceptions excel at handling errors in purely functional code (ie functions with no possible side-effects). Likewise, any shared state or external state should be deleted if that's acceptable. In the case of shared state, either throw exceptions until that shared state becomes local state and is cleaned up by unrolling of the stack (either statically or via the GC), or restart the program (I've read people suggesting the use of something like nodejitsu forever). For external state, this is likely more complicated.
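The reversible-state case above can be sketched as a batch applier that rolls back completed steps when one throws; the step/undo shape is illustrative:

```javascript
// Sketch of handling reversible state: apply a batch of updates, and if
// any step throws, revert the steps already applied (in LIFO order)
// before propagating the error, so no partial state is left behind.
function applyAll(steps) {
  var undos = [];
  try {
    steps.forEach(function (step) {
      step.apply();
      undos.push(step.undo); // remember how to revert this step
    });
  } catch (err) {
    // Reverse completed steps, newest first, then re-throw.
    undos.reverse().forEach(function (undo) { undo(); });
    throw err;
  }
}
```

This is the manual, in-process analogue of a database rollback: the exception still escapes, but the state it leaves behind is the state you started with.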
The last case is when the data is critical. Well, then you're going to have to live with the bugs you've created. Everyone has to deal with bugs, but it's worst when your bugs involve corrupted data. This will usually require manual intervention (reconstructing the lost/damaged data, selectively pruning, etc.); exception handling won't get you the whole way in this last case.
I wrote a similar answer related to how to handle mid-operation failure in various cases in the context of multiple updates to some data storage: https://stackoverflow.com/a/28355495/122422
Taking the sample from the node.js documentation:
var d = require('domain').create();
d.on('error', function(er) {
// The error won't crash the process, but what it does is worse!
// Though we've prevented abrupt process restarting, we are leaking
// resources like crazy if this ever happens.
// This is no better than process.on('uncaughtException')!
console.log('error, but oh well', er.message);
});
d.run(function() {
require('http').createServer(function(req, res) {
handleRequest(req, res);
}).listen(PORT);
});
In this case you are leaking connections when an exception occurs in handleRequest before you close the socket.
"Leaked" in the sense that you finished processing the request without cleaning up afterwards. Eventually the connection will time out and close the socket, but if your server is under high load it may run out of sockets before that happens.
Depending on what you do in handleRequest you may also be leaking file handles, database connections, event listeners, etc.
Ideally you should handle your exceptions so you can clean up after them.
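One way to make that cleanup systematic is to track resources as they are opened while handling a request, so they can all be released from the error handler before shutting down. A sketch (the resource objects are hypothetical; anything with a close() method works):

```javascript
// Sketch of a per-request resource tracker: register sockets, file
// handles, database connections, etc. as they are opened, and release
// them all if an exception aborts the request mid-flight.
function ResourceTracker() {
  this.open = [];
}
ResourceTracker.prototype.track = function (resource) {
  this.open.push(resource);
  return resource;
};
ResourceTracker.prototype.closeAll = function () {
  // Close in reverse order of acquisition; ignore secondary failures
  // so one broken close() can't prevent the rest from being released.
  while (this.open.length) {
    try { this.open.pop().close(); } catch (ignored) {}
  }
};
```

In the domain example above, calling something like tracker.closeAll() inside the 'error' handler, before logging and exiting, is what turns "leaking like crazy" into an orderly shutdown.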
I'm having a problem with a Java JSF application: In a certain case, a user action causes an Ajax HTTP request that updates the UI correctly, but then immediately a second request is triggered, causing a second, incorrect update.
How can I find out (preferably using Firebug) where exactly that second request is triggered? There's a lot of minified framework JS code, so I don't know where to place breakpoints. Setting the form's onsubmit handler to console.trace did not help, I suppose because these are independent Ajax requests.
While trying out the suggestions in the answers, I found that Firebug already has exactly what I need out of the box: the Console tab displays all requests, and for Ajax requests it shows the file and line number where they originate, which tells me where to set my breakpoint...
Using Firebug you can set Breakpoints on DOM (HTML) Mutation Events if you have some HTML changes in your UI update.
If the framework abstracts the AJAX requests, you should be able to trace the calls to the abstractions. For example, jQuery allows this through its global AJAX event handlers.
Another, more robust way to tackle the problem would be to replace the XHR object and trace calls made to it (i.e. if the framework does not provide the above abstraction or if the calls that you want to use don't use the abstraction). Just replace the GM_log with console.trace in the script at the end of the page and include it in the page you're testing.
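A sketch of that replacement technique: wrap the send method of an XHR-like class so every outgoing request logs a stack trace first. In a browser you would pass the real XMLHttpRequest; the logger parameter is an assumption added here so the sketch is testable, and it defaults to console.trace:

```javascript
// Sketch of monkey-patching an XHR-like class: every call to send()
// first logs a trace (revealing the call site), then delegates to the
// original implementation so requests still go out unchanged.
function traceRequests(XHRClass, logger) {
  var origSend = XHRClass.prototype.send;
  XHRClass.prototype.send = function () {
    (logger || console.trace).call(console, 'XHR send intercepted');
    return origSend.apply(this, arguments);
  };
}
```

Run traceRequests(XMLHttpRequest) early in page load, trigger the user action, and the trace printed for the second request should point at the code that issued it.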
What I personally have done in these cases is use an HTTP proxy that can put a request or response 'on hold', e.g. Burp Proxy (this is actually a security tool, but it works great for debugging purposes).
Start up the proxy and configure your browser to use it. Navigate to the page the rogue request originates from and activate request interception (this might take some practice, as Burp Proxy can be a rather complicated tool).
Now perform the user action; if all goes well the proxy intercepts it and waits for your confirmation to let it through. Do this. Then you'll probably see the second request coming and being intercepted by the proxy as well. Don't let this one through; instead switch to Firebug and break into the debugger. Hopefully you'll then be able to see where it originates from. Edit: on second thought, the asynchronous nature of AJAX probably means you won't be able to see the exact spot via this method anyway... :(
At least you can also configure it to intercept responses. Both requests and responses can be edited on the fly, which can be great for experimenting and debugging and might help in narrowing down the problem.
This might also help: caller is a property of JavaScript's Function object.
console.log(arguments.callee.caller.toString()); // note: arguments.callee is disallowed in strict mode
I've looked around for a suitable method to catch or prevent invalid JSON.parse calls, specifically in the case of WebSocket messages, without involving a try/catch block, due to its performance hit.
I've almost fully moved my RESTful API to a pure WebSocket API using JSON for communication. The only problem is, I can't figure out how to prevent JSON.parse from halting the app when a malformed message string is passed to my onmessage function. All messages sent from the server are theoretically proper JSON that has been stringified, so the question also is: is this an edge case to worry about? The server-side function that sends data stringifies the JSON before sending.
I'm using React and Redux with redux-thunk to open a WebSocket and add event listeners, so on a message the function below is being run.
function onMessage(msg) {
const data = JSON.parse(msg.data);
return {
type: data.type,
data: data.data
}
}
But this, of course, breaks if msg.data is not a valid JSON string, halting execution of the app.
So, without a try/catch block, is the only option to (somehow) ensure valid JSON is being sent? Or is this an edge case I shouldn't be worried about.
EDIT
This may not be such a big issue for the client side, since all messages are coming from a centralized point (the server). On the other hand, it's quite a big issue for the server, seeing as it's possible for it to receive messages that have not been sent from the application.
Is try/catch really the devil it's made out to be? Since the only thing I can think of is to create a regex check, which in itself would end up becoming quite complicated.
don't involve try/catch block due to its performance hit.
Forget the myths. You want to catch an exception, like the one from JSON.parse, you use a try/catch block. It's that simple and not a significant performance hit. Of course you could also write your own logic to validate JSON strings (not with regex!), but that's gonna be a complete parser which just doesn't use exceptions to signal malformed input - and much slower than the native function.
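For completeness, the try/catch wrapper under discussion is tiny; a sketch that returns null (or a caller-supplied fallback) instead of throwing on malformed input:

```javascript
// Safe JSON parse helper: wraps the native parser in try/catch so a
// malformed message yields a fallback value instead of halting the app.
function safeParse(text, fallback) {
  try {
    return JSON.parse(text);
  } catch (err) {
    return fallback === undefined ? null : fallback;
  }
}
```

The onmessage handler can then check for null and drop (or log) the bad message, keeping the native parser's speed for the common, valid case.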
Is this an edge case to worry about?
On the client, hardly. You're controlling the server and making sure to send only valid JSON strings. If you don't, I'd worry much more about the server than about a few clients crashing. The users will most likely reload the page and continue.
Though on the other hand, quite a big issue for the server, seeing as it's possible for it to receive messages that have not been sent from the application.
Yes. On the server you absolutely need to worry about malformed input. If sending invalid JSON makes your server crash, that's really bad.