I know a couple of approaches that use Ajax to manipulate a database, but I'm not quite sure which one (if any) is proper and most widely used, so I have a couple of questions.
Should I have one URL that handles all kinds of requests accordingly, or multiple URLs, one for each specific purpose?
Would it be better to have a single function containing the Ajax request, to which an object parameter can be passed to configure the request (URL, type, etc.), or multiple functions, each with the Ajax request's properties hard-coded for a specific task?
What would be the cleanest design pattern for structuring data manipulation? Thanks!
I'm not sure anyone can answer this definitively without more specific information about the domain you're dealing with. It's possible that one function could deal with multiple CRUD operations, with a clever architecture making decisions based on the parameters passed in.
In most cases, though, the decision-making process would become very ugly (large switch statements, for example), and more explicit functions and methods would be more descriptive and appropriate.
As mentioned by Steve in the comments, REST could be an appropriate way to differentiate server-side calls while using a single client-side AJAX function. All that's needed on the client is knowledge of the url to be used.
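For illustration only (the helper name and its options are invented here, not from the question or comments), a single client-side function can cover every operation, with the REST-style URL and HTTP method doing the differentiating:

// Hypothetical generic helper: the caller supplies url, method and data,
// so one function covers create/read/update/delete against REST-style URLs.
function sendRequest(options, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open(options.method || 'GET', options.url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // Hand the parsed response (or an error) back to the caller.
            if (xhr.status >= 200 && xhr.status < 300) {
                callback(null, xhr.responseText ? JSON.parse(xhr.responseText) : null);
            } else {
                callback(new Error('HTTP ' + xhr.status));
            }
        }
    };
    xhr.send(options.data ? JSON.stringify(options.data) : null);
}

// Usage: the URL/method pair identifies the operation, not the function.
sendRequest({ url: '/api/users/42', method: 'PUT', data: { name: 'Bob' } },
    function (err, user) {
        if (err) { return; }   // handle/display the error here
        // update the UI with `user`
    });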
There's a bit of someone else's code I'm trying to add functionality to. It uses WebSockets to communicate with a server that I will most likely not be able to change (the server runs on a $3 micro-controller...)
The pattern used, for instance when uploading data to the server, consists of setting some global variables, sending a series of messages on the socket, and having an 'onmessage' handler deal with the response. This seems clumsy, given that it assumes only one socket call is ever in progress at a time (I think the server in fact guarantees that). The server can reply with multiple messages, and even figuring out when the messages have finished is fiddly.
I'm thinking of restructuring things so that I have a better handle on the flow, mostly w.r.t. knowing when the response has arrived (and finished), moving to patterns like
function save_file(name, data, callback) {
}
And perhaps at some point I can even turn them into async functions.
So, a couple of questions:
- is there some kind of identifier in the WebSocket object that would let me tie a request to its response?
- short of that, what is the right pattern? I started using custom events, which lets me tie the whole process together much better, since I can supply a callback by attaching it to the event, but even removeEventListener is tricky because I need to keep a reference to every single listener to make sure I can remove them later. (A rough sketch of what I'm aiming for follows.)
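Roughly, I'm picturing a wrapper like this (names invented; it leans on the assumption that the server answers one request at a time and that the socket is already open, and it ignores the multi-message-response problem for now):

// Queue one callback per request and assume the server replies strictly in order.
function SocketClient(url) {
    var pending = [];                      // callbacks waiting for a response
    var socket = new WebSocket(url);
    socket.onmessage = function (event) {
        var done = pending.shift();        // oldest outstanding request gets this reply
        if (done) { done(null, event.data); }
    };
    socket.onerror = function (err) {
        var done = pending.shift();
        if (done) { done(err); }
    };
    this.request = function (message, callback) {
        pending.push(callback);
        socket.send(message);
    };
}

// e.g. save_file built on top of it (the message format here is made up)
function save_file(client, name, data, callback) {
    client.request('SAVE ' + name + '\n' + data, callback);
}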
Any advice anyone?
I'm wondering which of the following would be the best way to pass server data and use it in a function, especially if the function is to be used by a component.
Method 1
function doSomething(elm, serverTime) {
// Do something
}
<script>
doSomething('foo', '<% php server time %>');
</script>
vs
Method 2
<div id="foo" data-server-time="<% php server time %>"></div>
function doSomething(foo) {
var serverTime = document.getElementById(foo).dataset.serverTime;
// Do something
}
<script>
doSomething('foo');
</script>
Method 3
Other suggestions?
I would like to do something like the following, but I'm not sure how:
document.getElementById("foo").doSomething() ?
For me, Method 1 would be better:
- the code would have less coupling
- the code would not use globals (document.getElementById)
- you could reuse your function in places that have no DOM, such as on the server
I would argue that the first is better in this simple example because the server time isn't really attached to any specific div element.
Just make sure no matter what you do that there are no XSS security holes.
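For example (a sketch only, using the question's pseudo-templating syntax and a made-up json_encode helper), Method 1 reduces the XSS risk if the server emits the value as a properly escaped literal instead of splicing raw text into the script:

<script>
// json_encode (or your templating language's equivalent) produces a quoted,
// escaped JavaScript literal rather than raw interpolated text.
doSomething('foo', <% json_encode(server_time) %>);
</script>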
You are at a crossroads looking for common practice, but no single approach is clearly more prevalent than another. As any great sage might tell you, which one you choose isn't as important as making the same choice every time; that is, be consistent.
Depending on the type of information, I would either pass it in the:
- HTTP header (e.g., via an HTTP cookie)
- Query string (if redirection is used)
- External JSON file (e.g., server.json), loaded via JS
- Embedded JSON object (e.g., window.SERVER = {'server_time': <%php ...%>};)
In your case, keeping the value closer to the JavaScript makes more sense and is easier to maintain if the JS is the main place you're working. Therefore, Method 1 is both cleaner and easier to change in the future. Method 2 would require sifting through the HTML and making sure you are modifying the correct line.
That said, I'm somewhat partial to keeping server data in an external JSON file or an embedded JSON object, so that if you need to track other server data/metadata later, it's easy to add.
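As a sketch of the external-file flavour (the file name and field are invented here): the server writes server.json and the client loads it once.

// server.json, generated by the server, might contain:
// { "server_time": "2013-05-01T12:00:00Z" }
var xhr = new XMLHttpRequest();
xhr.open('GET', '/server.json');
xhr.onload = function () {
    var server = JSON.parse(xhr.responseText);
    doSomething('foo', server.server_time);   // same call as in Method 1
};
xhr.send();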
I would argue that all of them are essentially the same and that, depending on your coding style, they would have the same performance.
Let's not forget that nowadays the most common approach is to attach event listeners to elements (jQuery, Angular and so on rely heavily on event listeners).
First of all, I must say that I'm very new to Google Closure, but I'm learning :)
Okay, so I'm making a web app that's going to be pretty big, and I thought it would be good to manage all the AJAX requests in one XhrManager. No problem there.
But is it possible to have some kind of default callback that would check for errors first, display them if necessary, and then, when the check passes, launch the "real" callback? I'm talking about a feature like the decoders in amplify.js. Here's their explanation:
Decoders allow you to parse an ajax response before calling the success or error callback. This allows you to return data marked with a status and react accordingly. This also allows you to manipulate the data any way you want before passing the data along to the callback.
I know it sounds complicated (and it is, really), and the fact that I'm not that good at explaining doesn't help either, but yeah.
The solution I have in my head right now is to create an object that stores all the 'real' callbacks, from which the error-checking callback would pick and execute the correct one after it finished checking, but that feels a bit hackish and I think there has to be a better way.
Since you always have to decode/verify your AJAX data (you never trust data returned from a server now do you?), you're always going to have different decoders/verifiers for different types of AJAX payloads. Thus you probably should be passing the decoder/verifier routine as the AJAX callback itself -- for verifications common to all data types, call a common function inside the callback.
An added benefit of this is the ability to "translate" unmangled JSON objects into "mangled" JSON objects so that you don't have to use quoted property access in your code.
For example, assume that your AJAX payload consists of the following JSON object:
{ "hello":"world" }
If you want to refer to the hello property in your code and still pass the Compiler's Advanced Mode, you'll need to write obj["hello"]. However, if you pass in your decoder as the callback, and on its first line you do:
var decoded = { hello:response["hello"] };
then do your error checking etc. before passing decoded along as the AJAX response. In the rest of your code you can simply write decoded.hello and everything will be nicely optimized and mangled by Advanced Mode.
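A sketch of what that might look like (the names are invented and the XhrManager.send() arguments are approximate; the point is just that the decoder wrapper is the callback):

// Wrap a "real" callback in a decoder/verifier; the wrapped function is what
// gets registered as the request's callback.
function withDecoder(realCallback) {
    return function (e) {
        var response = e.target.getResponseJson();   // e.target is the underlying XhrIo
        // Re-key the properties so Advanced Mode can rename them safely.
        var decoded = { hello: response['hello'], status: response['status'] };
        if (decoded.status !== 'ok') {
            // Shared error handling for every request goes here.
            console.error('Request failed', decoded);
            return;
        }
        realCallback(decoded);
    };
}

xhrManager.send('greeting', '/api/hello', 'GET', null, null, 1,
    withDecoder(function (data) {
        console.log(data.hello);   // plain dotted access, safe under Advanced Mode
    }));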
I've never really had to return JavaScript from an XHR request. In the times I've needed to apply behaviour to dynamically loaded content, I've always been able to do it within the script making the call.
Could someone provide actual real-world cases, just so I'm aware, of when you'd genuinely need to do this (not merely for convenience), or some reasons why in certain cases it's better to return JS along with the other content instead of building that functionality into your callback?
The only scenario that comes to mind is a heavily customized site. If the site supports multiple languages, for example, and the functionality changes depending on the language, and Ajax is used to pull in dynamic content, then in some languages one behaviour is needed while in others a different one is, and it may be more efficient to return JS in script blocks instead of dumping all that logic into a callback.
Sometimes it is more convenient to "prepare" the JavaScript code on the server side. You can use the server's programming or scripting language to generate the code and you can fill it with values from the database. This way most of the logic takes place on the server and not the client. But it is really a matter of taste. OK, that wasn't a real world case but maybe my opinion is helpful anyway.
We use XHR to request an entire web page that includes JavaScript for menus, etc. We then replace the current page with the new one that was sent over XHR.
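For what it's worth, a generic sketch of that kind of replacement (not the poster's actual code): scripts inside HTML assigned via innerHTML are inert, so the returned page's script blocks have to be re-created before they run.

// container is the element being replaced; html is the page fragment from XHR.
function replaceWithScripts(container, html) {
    container.innerHTML = html;               // any scripts in here are inert
    var scripts = container.getElementsByTagName('script');
    // Copy each inert script into a fresh element so the browser executes it.
    for (var i = 0; i < scripts.length; i++) {
        var fresh = document.createElement('script');
        if (scripts[i].src) {
            fresh.src = scripts[i].src;
        } else {
            fresh.text = scripts[i].text;
        }
        document.body.appendChild(fresh);
    }
}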
I've started to wrap my functions inside of Objects, e.g.:
var Search = {
carSearch: function(color) {
},
peopleSearch: function(name) {
},
...
}
This helps a lot with readability, but I continue to have issues with reusability. To be more specific, the difficulty is in two areas:
Receiving parameters. A lot of the time I will have a search screen with multiple input fields and a button that calls the JavaScript search function. I either have to put a bunch of code in the button's onclick to retrieve and then marshal the values from the input fields into the function call, or I have to hardcode the HTML input field names/IDs so that I can subsequently retrieve them with JavaScript. The solution I've settled on is to pass the field names/IDs into the function, which then uses them to retrieve the values from the input fields. This is simple but really seems improper.
Returning values. The effect of most JavaScript calls tends to be that some visual on the screen changes, either directly or as a result of another action performed in the call. Reusability is toast when I put these screen-altering effects at the end of a function. For example, after a search is completed I need to display the results on the screen.
How do others handle these issues? Putting my thinking cap on leads me to believe that I need a page-specific layer of JavaScript between each use in my application and the generic, application-wide methods I create. Using the previous example, I would have a search button whose onclick calls a myPageSpecificSearchFunction, in which the search field IDs/names are hardcoded; it marshals the parameters and calls the generic search function. The generic function would return data/objects/variables only, and would not directly read from or make any changes to the DOM. The page-specific search function would then receive this data back and alter the DOM appropriately (roughly as in the sketch below).
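Something like this, where the names and fields are only illustrative:

// Generic, application-wide: knows nothing about this page's DOM;
// it takes criteria and returns plain data.
var Search = {
    peopleSearch: function (criteria) {
        var results = [];
        // ... query the server / filter data using criteria ...
        return results;
    }
};

// Page-specific layer: the only place this page's field IDs are hardcoded.
function myPageSpecificSearchFunction() {
    var criteria = {
        name: document.getElementById('name-field').value,
        city: document.getElementById('city-field').value
    };
    var results = Search.peopleSearch(criteria);
    // Page-specific DOM update with the returned data.
    document.getElementById('results').textContent = results.length + ' result(s) found';
}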
Am I on the right path or is there a better pattern to handle the reuse of Javascript objects/methods?
Basic Pattern
In terms of your basic pattern, can I suggest modifying your structure to use the module pattern and named functions:
var Search = (function(){
var pubs = {};
pubs.carSearch = carSearch;
function carSearch(color) {
}
pubs.peopleSearch = peopleSearch;
function peopleSearch(name) {
}
return pubs;
})();
Yes, that looks more complicated, but that's partially because there's no helper function involved. Note that now, every function has a name (your previous functions were anonymous; the properties they were bound to had names, but the functions didn't, which has implications in terms of the display of the call stack in debuggers and such). Using the module pattern also gives you the ability to have completely private functions that only the functions within your Search object can access. (Just declare the functions within the big anonymous function and don't add them to pubs.) More on my rationale for that (with advantages and disadvantages, and why you can't combine the function declaration and property assignment) here.
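For example (an illustrative addition, not from the original answer), a private helper simply lives inside the anonymous function and is never added to pubs:

var Search = (function(){
    var pubs = {};

    pubs.carSearch = carSearch;
    function carSearch(color) {
        return normalize(color);        // free to use the private helper
    }

    // Private: never attached to pubs, so only code inside this module sees it.
    function normalize(value) {
        return String(value).trim().toLowerCase();
    }

    return pubs;
})();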
Retrieving Parameters
One of the functions I really, really like from Prototype is the Form#serialize function, which walks through the form elements and builds a plain object with a property for each field based on the field's name. (Prototype's current – 1.6.1 – implementation has an issue where it doesn't preserve the order of the fields, but it's surprising how rarely that's a problem.) It sounds like you would be well-served by such a thing and they're not hard to build; then your business logic is dealing with objects with properties named according to what they're related to, and has no knowledge of the actual form itself.
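A home-grown version (a sketch only, not Prototype's actual implementation) can be as simple as walking form.elements:

// Build { fieldName: value, ... } from a form, so business logic never
// touches the DOM directly.
function serializeForm(form) {
    var data = {};
    for (var i = 0; i < form.elements.length; i++) {
        var el = form.elements[i];
        if (!el.name) continue;                          // skip unnamed controls
        if ((el.type === 'checkbox' || el.type === 'radio') && !el.checked) continue;
        data[el.name] = el.value;
    }
    return data;
}

// e.g. Search.peopleSearch(serializeForm(document.getElementById('people-form')));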
Returning Values / Mixing UI and Logic
I tend to think of applications as objects and the connections and interactions between them. So I tend to create:
- Objects representing the business model and such, irrespective of interface (although, of course, the business model is almost certainly partially driven by the interface). Those objects are defined in one place, but used both client- and server-side (yes, I use JavaScript server-side), and designed with serialization (via JSON, in my case) in mind so I can send them back and forth easily.
- Objects server-side that know how to use those to update the underlying store (since I tend to work on projects with an underlying store), and
- Objects client-side that know how to use that information to render to the UI.
(I know, hardly original!) I try to keep the store and rendering objects generic so they mostly work by looking at the public properties of the business objects (which is pretty much all of the properties; I don't use the patterns like Crockford's that let you really hide data, I find them too expensive). Pragmatism means sometimes the store or rendering objects just have to know what they're dealing with, specifically, but I do try to keep things generic where I can.
I started out using the module pattern, but then started doing everything in jQuery plugins. The plugins allow you to pass page-specific options.
Using jQuery would also let you rethink the way you identify your search inputs and collect their values. You might consider adding a class to every input and using that class to avoid naming each input specifically, along the lines of the sketch below.
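Something along these lines, where the class name and IDs are invented for the example:

// Collect every input marked with the class into a criteria object,
// keyed by each input's name attribute; no hardcoded IDs needed.
function collectCriteria($form) {
    var criteria = {};
    $form.find('.search-field').each(function () {
        criteria[this.name] = $(this).val();
    });
    return criteria;
}

$('#search-button').click(function () {
    var results = Search.peopleSearch(collectCriteria($('#search-form')));
    // render the results...
});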
JavaScript is ridiculously flexible, which means that your design is especially important since you can do things in many different ways. This is probably what makes JavaScript feel like it doesn't lend itself to re-usability.
There are a few different notations for declaring your objects (functions/classes) and then namespacing them, and it's important to understand the differences. As mentioned in a comment here, 'namespacing is a breeze', and it's a good place to start.
I wouldn't be able to go far enough in this reply and would only be paraphrasing, so I recommend buying these books:
Pro JavaScript Design Patterns
Pro JavaScript Techniques