I am starting work on an existing Node/Express/Mongoose project -- I am currently going through the code and trying to understand how it works. The Express routes are being generated dynamically, that is to say there are functions that set up the routes -- the HTTP method, the path to the resource, the Express app, etc. are passed into those functions as parameters, and the routes are constructed at runtime. There are many nested functions -- it's a complex project -- but it all ends up with the line
app[method](path, requireAuthentication, requireAdminAuthentication, validateRequestBody, done);
which sets up the route.
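To make that concrete, here is a simplified, self-contained sketch of the kind of factory I mean (setRoute, the middleware stubs, and the handler are illustrative names, not the project's actual identifiers):
var express = require('express');
var app = express();
// stand-ins for the real middleware referenced in the line above
function requireAuthentication(req, res, next) { next(); }
function requireAdminAuthentication(req, res, next) { next(); }
function validateRequestBody(req, res, next) { next(); }
// the kind of factory the project uses: everything arrives as a parameter
function setRoute(app, method, path, done) {
  app[method](path, requireAuthentication, requireAdminAuthentication, validateRequestBody, done);
}
// called at startup, e.g. for the POST /widgets route
setRoute(app, 'post', '/widgets', function (req, res) { res.json({ ok: true }); });
app.listen(3000);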
Is there any way to debug a route after it's been constructed? That is to say, if I wanted to put some debug() statements in the POST route for '/widgets', but that route doesn't exist anywhere in the code, and in fact doesn't exist at all until after the app starts, where do I put the statement?
Well, you can use the DevTools. Since this is a Node app, start it with the inspector enabled (node --inspect) and attach Chrome DevTools via chrome://inspect. Then go to the Sources tab, press CTRL+O, and start typing the name of the file that contains your dynamic route setup; you can find it listed there. Open it and set breakpoint(s) wherever they are required.
Hope this helps!
I have a few tiny Node apps doing basic stuff like checking things and filling my DB (triggered by cron). In my Nuxt app I will need to use part of what is inside these Node apps. What is the best way to organise this? Do I keep them separate or merge them into my Nuxt app? Do I copy what I need from these Node apps and adapt it in Nuxt, do I use server-side middleware to add those Node apps inside my Nuxt project, or do I create my Nuxt app with Express and use /server/index.js to add my Node apps there in some way?
Let's take an example. You have a Node app that checks some data every hour and fills a DB. In the Nuxt app you have a page showing the content of the DB, but you first want to be sure that nothing new needs to be added to the DB since the last hour. The code I would have to run in the Nuxt page is the same code as in the Node app (check and fill the DB). It seems a bit silly (and hard to maintain and update) to have the same code in two places. But I'm not sure how I would have this Node app running every hour in my Nuxt app. Any advice would be greatly appreciated.
Here is a control flow that may help you think about designing this cron microservice. There are many ways to do this, and this may not be the best approach, but I think it will work for your use case.
Have a services directory in your server (can also be called middleware).
Include a cron.js file that contains the logic for the task runner.
Within cron.js, have Node push a scheduled message to Vue, such as a JSON payload like {message: 'checkNewData'}. One way to deliver this is a "server-sent event" (SSE): a message the server pushes to the browser over a long-lived HTTP connection, which it can emit on its own schedule from the Node environment (see the sketch after these steps).
In Vue, at the root-level App.vue, use the created() hook to register an event listener for the server-sent "checkNewData" message. When the listener hears it, it should trigger Vue to check the appropriate component, package up any new data, and send it to the DB in a POST or PUT HTTP call, depending on whether you're adding new data or replacing the old with the new.
This configuration would give you a closed-loop system for automatic updates. The next challenge would be making this operation client-specific, but that is something to worry about once you have this working. Again, others may have a different approach, but this is how I would handle the flow.
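A rough sketch of what that loop could look like, assuming node-cron for scheduling and an Express-style server middleware in Nuxt (the /events endpoint, the hourly schedule, checkAndFillDb() and refreshFromDb() are illustrative placeholders, not part of any existing API):
// services/cron.js
const cron = require('node-cron');
const clients = []; // open SSE connections
// hypothetical hourly job: reuse the check-and-fill logic from the existing Node app
cron.schedule('0 * * * *', async () => {
  await checkAndFillDb(); // placeholder for that logic
  // tell every connected browser that fresh data may be available
  clients.forEach((res) =>
    res.write('data: ' + JSON.stringify({ message: 'checkNewData' }) + '\n\n')
  );
});
// SSE endpoint exposed by the server (e.g. registered as a Nuxt serverMiddleware)
module.exports = function events(req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  clients.push(res);
  req.on('close', () => clients.splice(clients.indexOf(res), 1));
};
And on the Vue side, in App.vue:
// App.vue (script section)
export default {
  created() {
    const source = new EventSource('/events');
    source.onmessage = (event) => {
      const payload = JSON.parse(event.data);
      if (payload.message === 'checkNewData') {
        this.refreshFromDb(); // placeholder: re-check the component's data and POST/PUT as needed
      }
    };
  }
};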
In Meteor, I have installed the spiderable package, which allows the application to be crawled by search engines. However, I want to exclude certain paths from being crawled.
For example, example.com/abc/[path] should not be crawled, whereas example.com/[path] should be.
I am unsure of how to do this. One guess is to include a robots.txt in the /public directory, and use regex as described here. However, the url doesn't contain the #! as it did in this question. Is that relevant?
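For example, if robots.txt is the right mechanism, I imagine something like this in /public (just a guess on my part; plain robots.txt only matches path prefixes, not full regex):
User-agent: *
Disallow: /abc/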
My current implementation is a bit more complicated, and it's based on the following quote from the package's README.md:
In order to have links between multiple pages on a site visible to spiders, apps must use real links (eg an <a> tag) rather than simply re-rendering portions of the page when an element is clicked.
At the moment, when the page is rendered, I test whether /abc is at the root of the path and, if so, set a persistent session variable. This lets me build all the links on my pages without the /abc prefix. When a link is clicked, an onBeforeAction() function checks whether the session variable is set and appends the prefix back onto the path, so the right template is rendered. In doing so, I am hoping those links won't be visible to the spider, but I am unsure how reliable this method is.
tl;dr - How to exclude certain paths from being crawled in Meteor?
It kind of depends on what you're doing with the folders you don't want crawled. If they're only going to be used on the server side, you can put them in the /private/ folder. If you want them accessible but uncrawlable, you can use folders whose names start with a period (e.g. /.hidden/); Meteor's build system ignores those, but you can still serve them yourself through WebApp.connectHandlers, similar to my answer here.
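A minimal sketch of that second approach, assuming server-only code and a dot-folder named .assets (the folder name and the path resolution are assumptions; adjust for your project layout):
// server/serve-hidden.js -- serve files from a dot-folder that the Meteor build ignores
var fs = Npm.require('fs');
var path = Npm.require('path');
Meteor.startup(function () {
  WebApp.connectHandlers.use('/.assets', function (req, res, next) {
    // NOTE: resolve against wherever the folder lives in your deployment;
    // a real handler should also guard against path traversal in req.url
    var filePath = path.join(process.cwd(), '.assets', req.url);
    fs.readFile(filePath, function (err, data) {
      if (err) return next(); // fall through to Meteor's normal handling
      res.writeHead(200);
      res.end(data);
    });
  });
});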
If you want them to be processed by Meteor as normal (e.g. javascript files) but then be inaccessible to the spiderable package, I'd suggest asking in meteor-core.
I am learning the source code of hexo, a project based on node.js.
And there is a file init.js:
if (results.config) {
  require('./plugins/tag');
  require('./plugins/deployer');
  require('./plugins/processor');
  require('./plugins/helper');
  require('./plugins/filter');
  require('./plugins/generator');
}
Why do these require statements not assign their return values to anything? I checked the index.js under each of these folders (e.g. tag), and it looks like this:
require('./init');
require('./config');
require('./generate');
require('./server');
require('./deploy');
require('./migrate');
require('./new');
require('./routes');
require('./version');
require('./render');
No exports found. I am wondering how these requires work.
I looked at the source you're talking about, and the basic answer to your question is that the code in those requires gets run. Normally, you're right that you need to have some kind of export to make use of objects inside those files, but hexo is being a bit nonstandard.
Instead of having each module be independent and fairly agnostic (except via requires), what they're doing is creating an object called 'extend' (look in extend.js) then each of those individual files (e.g. ./init, ./migrate, etc) require extend.js and hang new objects and functions on it in a sort of namespaced fashion.
If you look at the end of those files you'll see calls to things like extend.tag.register. Modules are cached when required, so in practice the way they're doing it acts something like a singleton in other languages.
As Paul points out, the requires you see should be considered as functional units themselves, rather than returning any useful values. Each of the files calls a function to modify shared internal state.
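A stripped-down illustration of the pattern being described (the file names and the register signature here are simplified stand-ins for hexo's actual code):
// extend.js -- a shared registry; every require of this file gets the same cached object
var store = { tags: {} };
module.exports = {
  tag: {
    register: function (name, fn) { store.tags[name] = fn; },
    list: function () { return store.tags; }
  }
};
// plugins/tag/index.js -- exports nothing; requiring it runs this code for its side effect
var extend = require('../../extend');
extend.tag.register('youtube', function (args) {
  // ...render the tag...
});
// init.js -- the bare require is enough to populate the registry
require('./plugins/tag');
console.log(Object.keys(require('./extend').tag.list())); // [ 'youtube' ]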
In the code we use something like this:
$('#wrapper').html('//app/views/content.ejs', {foo:"bar"});
And when we build the app, this still stays the same, although the content.ejs file is built into production.js.
So my question is, what should we do so that when we build the app, these references point to ejs files inside of production.js?
We are using JMVC 3.2.2
We've also tried doing it this way:
$('#wrapper').html( $.View('//app/views/content.ejs', {foo:"bar"}) );
Your views are not getting added to production.js; you need to steal each one of them:
steal('//app/views/content.ejs');
JMVC 3.1:
steal.views('//app/views/content.ejs');
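For instance, the steal call would go in the file that builds your app (the path and dependency list below are guesses based on a typical JMVC layout):
// app/app.js
steal(
  'jquery/view/ejs',          // EJS view plugin
  '//app/views/content.ejs',  // pull the template into the build so it ends up in production.js
  function () {
    $('#wrapper').html('//app/views/content.ejs', { foo: 'bar' });
  }
);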
Got the answer in JMVC forum: https://forum.javascriptmvc.com/topic/#Topic/32525000000958049
Credit to: Curtis Cummings
Answer:
The paths to the views do not need to change. When the production.js file is created, your views are included and get preloaded when the script runs. When you reference '//app/views/content.ejs', the view system first checks whether the view file you are requesting has been preloaded and, if it has, uses that instead of making a request for the .ejs file.
Hey. So I have a Rails app that's getting deployed to a production machine that serves it via Apache/Passenger to two subURIs.
/app
/app_2
Both of the above subURIs run the same codebase; they're just two symlinks to the public dir, which Passenger picks up via:
RailsBaseURI "/app"
RailsBaseURI "/app_2"
Now, imma big fan of jQuery. So I'm using it with a couple of plugins and one of them is an autocomplete function:
$("#municipality_name").autocomplete('autocomplete_municipality', {
matchContains: true,
scroll: false
}).result(function(event, data, formatted) {
$.post('fill_state', {city: formatted}, null, 'script');
});
The problem I'm having is when this gets run in production, the url that gets called is
http://www.app.com/autocomplete_municipality
rather than
http://www.app.com/app/autocomplete_municipality
- or -
http://www.app.com/app_2/autocomplete_municipality
(I know it's lacking the controller name in there, I have routes to give me these paths.)
Anyways, my question is, how the hell do I tell jQuery to not blow away the subURI path? config.action_controller.relative_url_root = "/signup_2" doesn't seem a viable option because I have multiple subURIs here. I am also loath to change the path in the jQuery method to be autocomplete('app/autocomplete_municipality'... since I still don't get access to my two subURIs. Maybe this is the only option.
The reason for the two URIs is mostly for internal testing --> production switchover (as we open the new site first to the company internally, then to the public at large) and we wanted to be able to launch everything at once from /app_2 (our testing path) to /app (the public path) without having to change anything on the configuration side.
Thoughts?
Or you could use a named route and a view helper to define the URL in a JavaScript var:
var autocomplete_path = '<%= autocomplete_municipality_path %>';
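Then pass the variable straight into the plugin call from the question:
$("#municipality_name").autocomplete(autocomplete_path, {
  matchContains: true,
  scroll: false
});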
Yeah, that's ugly.
(deploying your testing and production apps to separate domains (or subdomains) instead of separate dirs will eliminate these problems)
You could parse the current location and place that into the autocomplete's path.
var pathname = window.location.pathname;
Then concatenate the first part of the path (or however deep you need to go) into the first param of autocomplete.
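Roughly (assuming the sub-URI is always the first path segment):
var pathname = window.location.pathname;   // e.g. "/app_2/signup"
var base = '/' + pathname.split('/')[1];   // "/app" or "/app_2"
$("#municipality_name").autocomplete(base + '/autocomplete_municipality', {
  matchContains: true,
  scroll: false
}).result(function (event, data, formatted) {
  $.post(base + '/fill_state', { city: formatted }, null, 'script');
});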
(Personally, I think a third-level domain to separate the applications would have been cleaner, and all your routing code would have remained the same per domain.)