I have some processing logic that is currently in my AngularJS front end, modularized away into services in order to keep my controllers clean. However, I need to bring some of this logic to my NodeJS backend.
For example:
function processPost(post){ // In reality I have many, many functions, so I would like to modularize
  if (post.verified == true) {
    post.status = 'Safe to trust!';
  }
}
Where should I put this code in my backend and how can I modularize it?
Should it be in the middleware or perhaps in my routes?
I would put it in a CommonJS module and then require it in the route. That way you can use the exact same file/code on the server and the client to verify the form data, or whatever that function does. On the client you'd load it with something like SystemJS or webpack. Judging by the name of the function, I would guess you want to run it whenever the user posts a certain form, so putting it in the route, which is the code that will be called whenever they post, makes the most sense to me.
module.exports = function processPost(post){
  if (post.verified == true) {
    post.status = 'Safe to trust!';
  }
};
If you have multiple functions you'd like to export from one file you could do:
module.exports.processPost = function processPost(post){
  if (post.verified == true) {
    post.status = 'Safe to trust!';
  }
};

module.exports.processGet = function processGet(){
  // Do work
};
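In the route itself, usage could then look something like this (a minimal sketch assuming an Express app with a body parser; the ./lib/process-post path is hypothetical):

var express = require('express');
var bodyParser = require('body-parser');
var processPost = require('./lib/process-post'); // hypothetical path to the shared module

var app = express();
app.use(bodyParser.json());

app.post('/posts', function (req, res) {
  var post = req.body;
  processPost(post); // same logic the Angular client can load via SystemJS/webpack
  res.json(post);
});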
I want to try using Scala.js on a Salesforce project.
Salesforce automatically injects the Visualforce.remoting.Manager.invokeAction(...) function to enable querying data and doing DML.
How can I call this function from Scala.js?
In the crudest sense, you could just invoke it directly from global scope using js.Dynamic, something like:
js.Dynamic.global.Visualforce.remoting.Manager.invokeAction(...)
That works for a one-off, but if you're going to be working more with this Manager, I'd recommend creating a Scala.js facade for the Manager and assigning the Manager object to that; it'll probably result in better code in the long run.
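Such a facade might look roughly like this (a sketch, not tested against Visualforce; the varargs signature for invokeAction is an assumption):

import scala.scalajs.js
import scala.scalajs.js.annotation.JSGlobal

// Facade for the injected global Visualforce.remoting.Manager object
@js.native
@JSGlobal("Visualforce.remoting.Manager")
object Manager extends js.Object {
  def invokeAction(args: js.Any*): Unit = js.native // actual signature is an assumption
}

With that in place, calls become plain Scala: Manager.invokeAction(...).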
You should be able to call any exposed JavaScript method from Scala.js using
scala.scalajs.js.eval(x: String)
Example of calling a Bootbox modal:
import scala.scalajs.js
import scala.scalajs.js.annotation.JSExportTopLevel

@JSExportTopLevel("myCallBack")
protected def confirmCallback(): Unit = {
  // do stuff
}

def askAQuestion(): Unit = {
  js.eval("bootbox.confirm(\"Your question goes here.\", function(result) { if (result == true) { myCallBack(); }});")
}
Of course, for this to work you have to include the JavaScript library in your project.
Since this is quite a messy solution, you might consider writing your own facade instead.
I am writing a standalone web scraper in Node, run from the command line, which looks for specific data on a set of pages, fetches page-view data from Google Analytics and saves it all in a MySQL database. Almost everything is ready, but today I found a problem with the way I write data to the db.
To make things easier, let's assume I have an index.js file and two controllers: db and web. Db reads/writes data to the db; web scrapes the pages using a configurable number of PhantomJS instances.
Web exposes one function, checkTargetUrls(urls, writer),
where urls is an array of urls to be checked and writer is an optional parameter, called only if it is a function and there is data to be written.
Now, the way I pass the writer is obviously wrong, but it looks as follows (in index.js):
// some code here
// ...
let pageId = 0;
// ... some promise code, which checks the validity of the urls,
// creates a new execution in the database, etc. ...
.then(urls => {
  return web.checkTargetUrls(urls,
    function (singleUrl, pageData) {
      // ... a chain of promise-returning functions from the db controller,
      // which first looks up the page id in the db, then puts it in the
      // pageId variable and continues with the write to the db ...
    });
}).then(() => {
  logger.info('All done captain!');
}).catch(err => { logger.error(err); });
The effect is that pageId randomly gets overwritten with the id of a preceding/succeeding page and invalid data is saved. Inside web there are up to 10 concurrent PhantomJS instances running, each of which calls the writer function after it has analyzed a page. Excuse my language, but an analogy for the situation would be having, say, 10 instances of some object that all rely on a singleton for writing, which causes the pageId overwriting problem (I don't know how to express it properly in JS/Node.js terms).
So far I have found one fix for the problem, but it is ugly, as it introduces tight coupling. If I put the writer code in a separate module and then load it directly from inside the web controller, everything works great. But to me that is a bad design pattern and I would rather do it differently.
var writer = require('./writer');

function checkTargetUrls(urls, executionId) {
  return new Promise(
    function (resolve, reject) {
      let poolSize = config.phantomJs.concurrentInstances;
      let running = 0;
      // ... a bit of code goes here ...
      if (slots != null && slots.data.length > 0) {
        return writer.write(executionId, singleUrl, slots);
      }
      // ... more code follows ...
    });
}
I have a hard time finding a nicer solution where I could still pass the writer as an argument to the checkTargetUrls(urls, writer) function. Can anyone point me in the right direction or suggest where to look for the answer?
The exact problem around your global pageId is not entirely clear to me, but you could reduce the coupling by exposing a setWriter function from your 'web' controller.
var writer;
module.exports.setWriter = function(_writer) { writer = _writer };
Then near the top of your index.js, something like:
var web = require('./web');
web.setWriter(require('./writer'));
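As for the overwriting itself: with up to 10 PhantomJS instances sharing one module-level pageId, any instance can change it between the lookup and the write. One way around that is to keep the id local to each writer invocation instead of shared. A sketch, where db.findPageId and db.writePageData stand in for your actual db controller functions:

function writer(singleUrl, pageData) {
  // pageId lives in this invocation's scope, so concurrent calls can't clobber it
  return db.findPageId(singleUrl)
    .then(function (pageId) {
      return db.writePageData(pageId, pageData);
    });
}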
I read a lot about Express / Socket.IO, and it's crazy how rarely you get an example other than a "Hello" transmitted directly from app.js. The problem is that it doesn't work like that in the real world... I'm actually desperate over a logic problem which seems far away from what the web gives me, which is why I wanted to point this out; I'm sure asking will be the solution! :)
I'm refactoring my app (because there were many mistakes, like using the global scope for libs, etc.). Let's say I've got a huge system based on Socket.IO and Node.js. There's a loader in app.js which starts the socket system.
When someone joins the app, it require()s another module which initializes many socket.on() listeners; these are loaded dynamically from some /*_socket.js files in a folder. Each function in those modules represents a socket listener, which makes it much easier to call from the front end. It might look like this:
// Will call `user_socket.js` and method `try_to_signin(some params)`
Queries.emit_socket('user.try_to_signin', {some params});
The system itself works really well. But there's a catch: the module that loads all those files which interpret what the front end has sent also transmits libraries linked with req/res (sessions, cookies, others...), and it must, because the called methods are the core of the app and very often need those libraries.
In the previous example we obviously need to check if the user isn't already logged-in.
// The *_socket.js file looks like this:
var $h = require(__ROOT__ + '/api/helpers');

module.exports = function($s, $w) {
  var user_process = require(__ROOT__ + '/api/processes/user_process')($s, $w);
  return {
    my_method_called: function(reference, params, callback) {
      // Stuff using $s, $w, etc.
    }
  };
};
// And it's called this way:
// $s = services (a big object)
// $w = workers (a big object depending on $s)
// They are linked with the req/res from the page when they are instantiated
controller_instance = require('../sockets/' + controller_name + '_socket')($s, $w);

// After some processing ...
socket_io.on(socket_listener, function (datas, callback) {
  // Will call the correct function, etc.
  $w.queries.handle_socket($w, controller_name, method_name, datas);
});
The good news: basically, it works.
The bad news: every time I refresh the page, the listeners double up, because they are registered in a loop called on page load.
(A screenshot here showed the same message logged several times; it should have been one line.)
So I should put all the socket.on('connection', ...) stuff outside the page load, which means registering it when the server starts... Yes, but I also need the req/res data to be able to load the libraries, and I only get that when the page is loaded!
It's a programming-logic problem. I know I did something wrong, but I don't know where to go now; I've got this big system which "basically" works, yet there's a sort of paradox in the way I built it and I can't figure out how to resolve it... I've been stuck for a couple of hours.
How can I refactor so that a socket.on() call can still get at the current libraries that depend on req/res? Is there a trick? Should I think about completely changing the way I did it?
Also, is there another way to do what I want to do ?
Thank you everyone!
NOTE: If I didn't explain well or if you want more code, just tell me :)
EDIT - SOLUTION: As seen below, we can use sockets.once() instead of sockets.on(), or there's also the sockets.removeAllListeners() solution, which is less clean.
Try it as below.
io.sockets.once('connection', function(socket) {
  io.sockets.emit('new-data', {
    channel: 'stdout',
    value: data
  });
});
Use once instead of on.
This problem is similar to the one in the following link:
https://stackoverflow.com/questions/25601064/multiple-socket-io-connections-on-page-refresh/25601075#25601075
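The removeAllListeners() variant mentioned in the edit above would look roughly like this (a sketch; 'new-data' stands in for whatever events get re-bound on each page load):

io.sockets.on('connection', function (socket) {
  // Clear handlers left over from a previous page load before re-binding
  socket.removeAllListeners('new-data');
  socket.on('new-data', function (data) {
    // handle the event once per emission, even after refreshes
  });
});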
What is the best approach to handling authentication for a bunch of JavaScript actions on a page without littering the code base with "if authenticated()" checks?
For example: we have 10 like buttons, some comment buttons and a few other actions that require authentication. When a user is not authenticated, we want to redirect them to a login/signup page. However, we want to avoid littering the code with if (user.isAuthenticated()) { xxx } calls. In our particular case we want to use these mostly for events in Backbone, although I don't think that matters for the general question.
With the help of Underscore.js, you can write something like this:
function authWrapper(func) {
  if (user.isAuthenticated()) {
    func.apply(this, _.rest(arguments));
  } else {
    // ... e.g. redirect to the login/signup page
  }
}
Suppose you're using jQuery; then, when binding the events, write this:
$(...).bind('event', _.wrap(function(...){...}, authWrapper));
or
$(...).bind('event', _.wrap(thehandler, authWrapper));
How about creating a method that does the checking and takes a callback for the method that should be called if authentication is OK? Something like:
function checkNdRun(cb) {
  // Gather everything after the callback into a real array
  var params = [].slice.call(arguments, 1);
  if (/*[authenticationCheckingLogic here]*/) {
    cb.apply(null, params);
  } else {
    alert('please login first');
  }
}
// usage example
somebutton.onclick = function(e) {
  checkNdRun(functionToRun, e /*, [other parameters] */);
};
We are attempting to make only certain functions available to run, based on what the request address is.
I was wondering how we could do this:
if (condition1)
{
  $(document).ready(function() {
    // ...
    // condition1's function
  });
}
else if (condition2)
{
  $(document).ready(function() {
    // ...
    // condition2's function
  });
}
else if ...
I was wondering what a good pattern for this would be, since we have all of our functions in one file.
It depends on what your conditions are like...
If they're all of a similar format, you could do something like this:
var array = [
  ["page1", page1func],
  ["page2", page2func]
  // ...
];

for (var i = 0; i < array.length; ++i) {
  var item = array[i];
  if (pageName == item[0]) $(document).ready(item[1]);
}
I like Nick's answer the best, but I might take a hash table approach, assuming the 'request address' is a known fixed value:
var request_addresses = {
  'request_address_1': requestAddress1Func,
  'request_address_2': requestAddress2Func
};
$(document).ready(request_addresses[the_request_address]);
Of course, request_addresses could look like this as well:
var request_addresses = {
  'request_address_1': function () {
    /* $(document).ready() tasks for request_address_1 */
  },
  'request_address_2': function () {
    /* $(document).ready() tasks for request_address_2 */
  }
};
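One small addition that may be worth it: if a page has no entry in the table, the lookup returns undefined, so a guard makes the no-match case explicit (a sketch using the same hypothetical names):

var handler = request_addresses[the_request_address];
if (typeof handler === 'function') {
  $(document).ready(handler);
}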
I don't see any problem with that. But this might be better:
$(document).ready(function() {
  if (condition1) {
    // condition1's function
  } else if (condition2) {
    // condition2's function
  }
  // ...
});
It would probably be cleaner to do the site-URL checking on the server (if you can?) and include different .js files depending on the condition, e.g. using ASP.NET MVC:
<html>
<head>
<%
if(Request.Url.Host == "domain.com")
{ %><script type="text/javascript" src="/somejsfile1.js"></script><% }
else
{ %><script type="text/javascript" src="/somejsfile2.js"></script><% }
%>
</head>
</html>
This way, each js file would be stand-alone, and also your HTML wouldn't include lines of JS it doesn't need (i.e. code meant for "other" sites)
Maybe you could give more detail as to what exactly you are doing, but from what I can tell, why wouldn't you just make a different JS file containing the necessary functions for each page, instead of trying to dump all of them into one file?
I would just leave all of the functions in one file if that's the way they already are. That will save you time on rework, and save the user time through reduced latency and browser caching. Just don't let the file get too large, or debugging and modifying it will become horrendous.
If you keep them all in one file, add a script on each page that calls the one(s) you want.
function funcForPage1() {...}
function funcForPage2() {...}
Then, on page1
$(funcForPage1);
etc.
Instead of doing what you're planning, consider grouping the functions in some logical manner and namespacing the groups.
You'd have an object that holds objects that hold functions, and you'd call them like this:
serial = myApp.common.getSerialNumber(year,month);
model = myApp.common.getModelNumber(year);
or
myApp.effects.blinkText(textId);
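Behind those calls, the namespace object itself might be set up something like this (a sketch; the group and function names are just the hypothetical ones used above):

var myApp = {
  common: {
    getSerialNumber: function (year, month) { /* ... */ },
    getModelNumber: function (year) { /* ... */ }
  },
  effects: {
    blinkText: function (textId) { /* ... */ }
  }
};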
If you wanted to hide a function or group of functions per page, I suppose you could null them out, by function or by group, after the load. But hopefully having things organized will satisfy your desire to clean up the global namespace.
I can't think of a particularly elegant way to achieve this using only JavaScript. If that's all that's available to you, then I'd at least recommend you use a switch statement or (preferably) a hash table implementation to reference your functions.
If I had to do something like this, given that my development environment is fully under my control, I'd break the JavaScript up into individual files and then, having determined the request, use server-side code to build a custom bundled JavaScript file and serve that. You can create cached copies of these files on the server and send client-side caching headers too.
This article, which covers the technique as part of a series, may be of interest to you.