Should Node.js be the basis of a website, or be used only when certain functionality (e.g. a chat room) is needed?
For example, in a certain project, I'm using npm and have a node_modules folder and a package.json file in the main directory where all my code is.
However, the question I'm asking is whether this is good practice. Should Node.js only be used in, for example, a chat-room/ folder? Or is it OK to use the same Node.js installation across an entire website, using it where needed?
Many companies these days have a hard time detaching their front-end code from their server; if you are starting a project from scratch, this is something you want to avoid.
Having loosely coupled applications will give you the flexibility to change your whole front-end stack without changing a single line of code in your API. Moreover, you will be able to use your API from different applications. They will work independently.
The routing is how you define the URL, the verb and any optional parameters.
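To make that concrete, here is what "URL, verb and optional parameters" means in plain Node terms. `matchRoute` is a hypothetical helper written for illustration, not an Express API; in Express you would simply write `app.get('/users/:id?', handler)` and it does all of this for you.

```javascript
// Minimal sketch of routing: map a URL pattern (with a ':name' parameter,
// '?' marking it optional) onto a concrete path, extracting the parameters.
function matchRoute(pattern, path) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  const params = {};
  let i = 0;
  for (const part of patternParts) {
    const optional = part.endsWith('?');
    const name = part.replace(/^:/, '').replace(/\?$/, '');
    if (part.startsWith(':')) {
      if (i < pathParts.length) params[name] = pathParts[i++];
      else if (!optional) return null; // required parameter missing
    } else {
      if (pathParts[i++] !== part) return null; // literal segment mismatch
    }
  }
  if (i !== pathParts.length) return null; // extra path segments
  return params;
}

console.log(matchRoute('/users/:id?', '/users/42')); // { id: '42' }
console.log(matchRoute('/users/:id?', '/users'));    // {}
console.log(matchRoute('/users/:id?', '/posts/42')); // null
```

The verb part of routing is simply dispatching on `req.method` before matching the URL, which Express also handles via `app.get` / `app.post` / etc.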
I hope that is what you have been struggling with.
Background:
I am building a node.js-based Web app that needs to make use of various fonts. But it only needs to do so in the backend since the results will be delivered as an image. Consequently, the client/browser does not need access to the fonts at all in my case.
Question:
I will try to formulate the question as objectively as possible:
What are the typical options to provide a node.js backend with a large collection of fonts?
The options I came up with so far are:
1. Does one install these hundreds or thousands of fonts in the operating system of the (in my case: Ubuntu) server?
2. Does one somehow serve the fonts from cloud storage such as S3, or from an (online) database such as a MongoDB server?
3. Does one use the local file system to store the fonts and retrieve them?
4. ...other options?
I am currently leaning towards Option 1 because this is the way a layman like me does it on a local machine.
Without starting a discussion here, where could I find resources discussing the (dis-)advantages of the different options?
EDIT:
Thank you for all the responses.
Thanks to these, I noticed that I need to clarify something. I need the fonts to be used in SVG processing libraries such as p5.js, paper.js and raphael.js, so I need to make the fonts available to these libraries that run on node.js.
The key to your question is
hundreds or thousands of fonts
Until I took that in, there was no real difference between your methods. But if that number is correct (kind of mind-boggling, though), I would:
not install them in the OS. What happens if you move servers without an image? Or change OS?
Local File system would be a sane way of doing it, though you would need to keep track manually of all the file names and paths for your code.
MongoDB - store the file names and paths in a collection, and store the actual font files on your file system.
In the event of moving servers, you would pick up the directory where all the actual files are stored and the DB where you hold the file names and paths.
If you want, you can place it all in MongoDB, but then the database would also be huge, I assume - that is up to you.
Choice #3 is probably what I would do in such a case.
If you have a decent enough server setup (e.g. a VPS or some other VM solution where you control what's installed) then another option you might want to consider is to do this job "out of node". For instance, in one of my projects where I need to build 175+ as-perfect-as-can-be maths statements, I offload that work to XeLaTeX instead:
I run a node script that takes the input text and builds a small but complete .tex file
I then tell node to call "xelatex theFileIJustMade.tex" which yields a pdf
I then tell node to call "pdfcrop" on that pdf, to remove the margins
I then tell node to call "pdf2svg", which is a free and amazingly effective utility
Then as a final step mostly to conserve space and bandwidth, I use "svgo" which is a nodejs based svg optimizer that can run either as normal script code, or as CLI utility.
(some more details on that here, with concrete code here)
Of course, depending on how responsive a system you need, you can do entirely without steps 3 and 5. There is a limit to how fast we can run, but as a server-side task there should never be the expectation of real-time responsiveness.
This is a good example of remembering that your server runs inside a larger operating system that might also offer tools that can do the job. While you're using Node and the obvious choice is a Node solution, Node is also a general-purpose programming environment and can call anything else through spawn and exec, much like Python, PHP, Java, C#, etc. As such, it's sometimes worth thinking about whether there might be another tool that is even better suited for your needs, especially when you're doing a highly specialized job like typesetting a string to SVG.
In this case, LaTeX was specifically created to typeset text from the command line, and XeLaTeX was created to do that with full Unicode awareness and clean, easy access to fonts both from file and from the system, with full OpenType feature control, so would certainly qualify as just as worthwhile a candidate as any node-specific solution might be.
As for the tools used: XeLaTeX and pdfcrop come with TeX Live (installed using whatever package manager your OS uses, or through MiKTeX on Windows, but I suspect your server doesn't run on Windows), pdf2svg is freely available on GitHub, and svgo is available from npm.
I'm new to JavaScript frameworks and am looking for a framework for my new projects. Until now I created simple apps using the MVC framework Django with a Bootstrap front end. Thanks to the framework, I got everything in one package with well-known best practices. For JavaScript I used some jQuery libraries without really understanding them, just configured with the help of the docs.
When I tried to write some JavaScript on my own, I found there had been big changes in the JS world (like ES6 and TypeScript), which I found very useful. When I discovered JS frameworks, I fell in love.
I have read about frameworks and watched some tutorials. Like many others, I found React nice. But what I'm completely missing is the server part. React tutorials create components or functions and nice UIs, but don't cover what happens with the data next. I know that React is meant as the "V" in MVC. But what is the best-practice or widely used extension for the server part? Are there tutorials or books to take me further than just creating actions and UI?
Thanks for any links; I just need to be pointed in the best direction. Or is React meant for just specific parts of a project, and is it better to look elsewhere?
As you said, yes, there are quite a number of tutorials, and most of them don't cover how to deploy Node apps on servers. I'll assume you have some server admin knowledge, so I'll skip straight to the meat of the setup.
The Server Setup
Regardless of whether it is a simple static page, a single-page API or a React app, you will need Node.js installed on any server where you want it to run. Second, you will need a production process manager for Node.js. My personal favourite is PM2. The process manager makes sure your app is always on and restarts it if it goes down for whatever reason. You will also need a reverse proxy server (you need this for security and SEO purposes). Again, a go-to for it is Nginx (a competitor of Apache).
Two very good tutorials for setting up your server are
Tut #1
Tut #2
The App Setup
Apart from all the server setup, you need to handle routing for your app (what happens when you go to /blog or /login). A stable standard right now is Express.js. You can do without it, but then you will need to write a lot of the routing by hand in Node.js. You will set up Express to serve back your index file and any others you may need.
A good tutorial for configuring your Express for a React app is Video Tut.
He does show a bit more but that is on later videos. You can stop once you get the gist of it.
Advanced Stuff
There's also the matter of whether you want the JavaScript to be rendered on the server or on the client side. For that you will need to do some more configuration for your app. The video tutorial I linked above should show you how to set that up as well, if you are interested.
I have designed a meteor.js application and it works great on localhost, and even when deployed to the internet. Now I want to create a sign-up site that will spin up new instances of the application for each client who signs up on the back end. Assuming a meteor.js application and Python or JavaScript for the sign-up site, what high-level steps need to be taken to implement this?
I am looking for a more correct and complete answer that takes the form of my poorly imagined version of this:
Use something like node or python to call a shell script that may or may not run as sudo
That script might create a new folder to hold instance specific stuff (like client files, config, and or that instances database).
The script or python code would deploy an instance of the application to that folder and on a specific port
Python might add configuration information to a tool like Pound to forward a subdomain to a port
Other things....!?
I don't really understand the high level steps that need to be taken here so if someone could provide those steps and maybe even some useful tools or tutorials for doing so I'd be extremely grateful.
I have a similar situation to you but ended up solving it in a completely different way. It is now available as a Meteor smart package:
https://github.com/mizzao/meteor-partitioner
The problem we share is that we wanted to write a meteor app as if only one client (or group of clients, in my case) exists, but that it needs to handle multiple sets of clients without them knowing about each other. I am doing the following:
Assume the Meteor app is programmed for just a single instance
Using a smart package, hook the collections on server (and possibly client) so that all operations are 'scoped' only to the instance of the user that is calling them. One way to do this is to automatically attach an 'instance' or 'group' field to each document that is being added.
Doing this correctly requires a lot of knowledge about the internals of Meteor, which I've been learning. However, this approach is a lot cleaner and less resource-intensive than trying to deploy multiple meteor apps at once. It means that you can still code the app as if only one client exists, instead of explicitly doing so for multiple clients. Additionally, it allows you to share resources between the instances that can be shared (i.e. static assets, shared state, etc.)
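The hooking idea can be illustrated in plain JavaScript, independent of Meteor's internals. `ScopedCollection` here is a hypothetical stand-in for a hooked Meteor collection: every write is stamped with the caller's group, and every read is silently restricted to it.

```javascript
// Sketch of partitioning: one physical collection, but each group of
// clients only ever sees its own documents.
class ScopedCollection {
  constructor() { this.docs = []; }
  insert(group, doc) {
    // The hook attaches the group field automatically on insert.
    this.docs.push({ ...doc, _group: group });
  }
  find(group, predicate = () => true) {
    // The hook restricts every query to the caller's group.
    return this.docs.filter(d => d._group === group && predicate(d));
  }
}

const posts = new ScopedCollection();
posts.insert('acme', { title: 'hello' });
posts.insert('globex', { title: 'world' });
console.log(posts.find('acme').length); // 1
```

In real Meteor code the group would come from the logged-in user rather than being passed explicitly, which is exactly what the partitioner package arranges.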
For more details and discussions, see:
https://groups.google.com/forum/#!topic/meteor-talk/8u2LVk8si_s
https://github.com/matb33/meteor-collection-hooks (the collection-hooks package; read issues for additional discussions)
Let me remark first that I think spinning up multiple instances of the same app is a bad design choice. If it is a stopgap measure, here's what I would suggest:
Create an archive that can be readily deployed. (Bundle the app, reinstall fibers if necessary, rezip). Deploy (unzip) the archive to a new folder when a new instance is created using a script.
Create a template of an init script and use forever, daemonize, jesus, etc. to start the site on reboot and keep the sites up during normal operation. See Meteor deploying to a VM by installing meteor or How does one start a node.js server as a daemon process? for examples. When a new instance is deployed, populate the template with new values (i.e. port number, database name, folder). Copy the filled-out template to init.d and link it to the runlevel. Alternatively, create one script in init.d that executes other scripts to bring up the site.
Each instance should be listening on its own port, so you'll need a reverse proxy. AFAIK, Apache and Nginx require restarts when you change the configuration, so you'll probably want to look at Hipache (https://github.com/dotcloud/hipache). Hipache uses Redis to store the configuration information, so adding a new instance only requires adding a key to Redis. There is an experimental port of Hipache that brings the functionality to Nginx: https://github.com/samalba/hipache-nginx
What about DNS updates? Once you create a new instance, do you need to add a new record to your DNS configuration?
I don't really have an answer to your question... but I just want to warn you about another potential problem you may run into, since I see you mentioned Python; in other words, you may be running another web app on Apache/Nginx, etc. The problem is that Meteor is not very friendly when it comes to coexisting with another HTTP server. The project I'm working on was troubled by this issue, and we had to move it to a standalone server after days of hassle with the guys from Meteor. I did not work on the issue myself, so I'm not able to give you more details, but I just looked online and found something similar: https://serverfault.com/questions/424693/serving-meteor-on-main-domain-and-apache-on-subdomain-independently.
Just something to keep in mind...
I am currently looking into alternative platforms to migrate an existing application onto. It started out as a prototype using ASP.NET MVC, but the majority of the code is JavaScript with a simple ASP.NET MVC web service, so as we look at taking it forward it seems sensible to scrap the current Microsoft stack and go for Node.js, giving us more freedom in where and how we host our application. We could also reuse some of the models and code in both the web service and the front end, although this would probably end up being a small amount.
This will probably be quite a large question encompassing many parts, but I will put it out there anyway, as I am sure it could be helpful to lots of other people looking at how to move from something like .NET/Java to Node.js. Most statically typed platforms rely on established patterns and practices, such as Inversion of Control, Unit of Work and Aspect Oriented Programming, so it feels strange to move to a platform which doesn't seem to require as much structure in this area... I therefore have some concerns about migrating from my super-structured and tested world to this new, seemingly unstructured and dynamic world.
So here are the main things I would do in MVC and want to do in Node.js, where I am not quite sure how to achieve the same level of separation or functionality.
Routing to Actions
This mechanism in ASP MVC seems to be replaceable by Express in Node.js, which will give me the same ability to map a route to a method. However, there are a couple of concerns:
In ASP MVC my controllers can be dependency-injected and have member variables, so actions are easy to test: everything they depend on can be mocked and passed in via the constructor. However, as the handler methods in Express don't have a containing scope, it seems I would have to either use global variables or instantiate the dependencies internally. Is there a nice way to access my business-logic containers in these routed methods?
Is there any way to auto-bind the model being sent, or at least get the JSON/XML in some useful form? It appears that if you send the correct MIME type with the content it can be extracted, but I have not seen any clear examples of this online. I am happy to use additional frameworks on top of Express to provide this functionality; ideally I just want to make a POST request to /user/1 and have the user object pulled out, then update the user with id 1 in the database (which we will talk about shortly).
What is the best way to validate the data being sent over? Currently in our front-end JavaScript application we use KnockoutJS; all models use Knockout observables and are validated using Knockout.Validation. I am happy to use Knockout Validation on Node, as the models are the contract between the front end and back end, but if there is a better solution I am happy to look into it.
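On the dependency-injection concern above, one common Express-friendly pattern is a factory function that closes over its dependencies and returns route handlers. This is a sketch, not an established API: `makeUserHandlers` and `userService` are hypothetical names for your own business-logic objects.

```javascript
// A factory gives handlers "constructor injection" via closure: the real
// service is passed in at startup, a stub is passed in by tests.
function makeUserHandlers(userService) {
  return {
    show(req, res) {
      const user = userService.findById(req.params.id);
      res.json(user);
    },
  };
}

// In the app:  app.get('/user/:id', makeUserHandlers(realService).show)
// In a test, pass a stub service and fake req/res objects:
const handlers = makeUserHandlers({ findById: id => ({ id, name: 'stub' }) });
let sent;
handlers.show({ params: { id: '1' } }, { json: x => { sent = x; } });
console.log(sent); // { id: '1', name: 'stub' }
```

Because the handlers are plain functions, they can be unit-tested without starting an Express server at all, which also touches on the testing concern further down.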
Database Interactions
Currently in .NET land we use NHibernate for communicating with our relational databases and the MongoDB driver for communicating with our MongoDB databases. We use a generic Repository pattern and isolate queries into their own classes. We also use a Unit of Work pattern quite heavily, so we can wrap logical chunks of work in a transaction and then either commit it all if it goes well or roll back if it doesn't. This gives us the ability to mock out our objects at almost any level depending on what we want to test, and also lets us change our implementations easily. So here is my concern:
Sequelize seems to be a good fit for replacing NHibernate; however, it doesn't seem to have any sort of transaction handling, making it very difficult to implement a unit-of-work pattern. This is not the end of the world if it cannot be done in the same way, but I would like some way of grouping a chunk of work, e.g. an action like CreateNewUserUnitOfWork which would take a model representing the user's details, validate it, create an entry in one table, then create some relational data in others, get the user's id from the database and send that back (assuming all went well). From looking at the QueryChainer it seems to provide most of the functionality, but if it failed on the 3rd of 5 actions it doesn't appear simple to roll back, so is there some way to get this level of control?
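For what it's worth, the unit-of-work shape described above can be sketched in a library-agnostic way: queue operations, run them in order, and undo the completed ones if a later step fails. `UnitOfWork` is a hypothetical helper, a stand-in until the ORM's own transactions are available, not real database atomicity.

```javascript
// Minimal compensating-action unit of work: each step registers both a
// "do" and an "undo"; on failure, completed steps are undone in reverse.
class UnitOfWork {
  constructor() { this.ops = []; }
  add(doFn, undoFn) { this.ops.push({ doFn, undoFn }); }
  commit() {
    const done = [];
    try {
      for (const op of this.ops) { op.doFn(); done.push(op); }
    } catch (err) {
      for (const op of done.reverse()) op.undoFn(); // roll back in reverse order
      throw err;
    }
  }
}
```

This is weaker than a real database transaction (the undo steps are only compensations), so it should be replaced by the ORM's transaction support as soon as that exists.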
Plugins / Scattered Configuration Data
This is more of a niche concern of our application: we have the central application, and then other DLLs which contain plugins. These are dropped into the bin folder and are then hooked into the routing, database and validation configuration. Imagine the Google homepage, where Google Maps, Docs, etc. were all plugins which told the main application to route additional calls to methods within the plugin, each with their own models and database configurations. Here is my concern around this:
There seems to be a way to update the routing by just scanning a directory for new plugins and including them (node.js require all files in a folder?), but is there some best practice around doing this sort of thing? I don't want each request to have to do a directory scan. It is safe to assume that the plugins are in their right places at the time of starting the Node application, so there is no need to add plugins at runtime at the moment.
Testing
Currently there is unit, integration and acceptance testing within the application. The unit tests run on both the front end and back end; our build script runs JavaScript tests using JsTestDriver to confirm all the business logic works as intended in isolation. Then we have integration tests, currently all written in C#, which test that our controllers, actions and units of work behave as expected; these are again kicked off by the build script, but can also be run via the ReSharper unit test runner. Finally, we have acceptance tests written in C# using WebDriver, which target the front end and test all the functionality via the browser. My main concerns around this are:
What are the best practices and frameworks for testing Node.js? Currently most of the tests at the ASP layer are carried out via C# scripts that create the controller with mocked dependencies and run the actions to prove they work as intended, using MVC helpers etc. I was wondering what level of support Node.js has for this, as it doesn't seem simple (at a quick glance) to test Node.js components without having Node running.
Assuming there is some good way to test Node.js, can this be hooked into build scripts via command-line runners etc.? We will want everything automated and isolated wherever possible.
I appreciate that this is more like 5+ smaller questions rolled into one bigger question, but as the underlying question is how to achieve good design with a large Node.js application, I hope this will still be useful to a lot of people who, like myself, come from larger enterprise applications to Node.js-style apps.
I recently switched from ASP MVC to Node.js and highly recommend switching.
I can't give you all the information you're looking for, but I highly recommend Sequelize as an ORM and mocha (with expect.js and sinon) for your tests. Sequelize is adding transaction support in the next version, 1.7.0. I don't like QueryChainer, since each of its elements is executed separately.
I got frustrated with Jasmine as a test framework, which is missing an AfterAll method for shutting down the Express server after acceptance tests. Mocha is developed by the author of Express.
If you want to have an Express server load in plugins, you could use a singleton pattern like this: https://github.com/JeyDotC/articles/blob/master/EXPRESS%20WITH%20SEQUELIZE.md
You will probably also really like https://github.com/visionmedia/express-resource, which provides a RESTful interface for accessing your models. For validation, I think you'll be happy with https://github.com/ctavan/express-validator
Why would you need to test Node modules without using Node? It's considered standard to call a test script with a Makefile, and you can add a pre-commit hook to git to run the tests before making a commit.
Is there a Javascript implementation of Git?
I'm wanting to use HTML5 to create a rich Javascript application and have the idea that I could use git to track changes to the data. So, I'm wondering if there is a javascript implementation of a git client, or maybe some way of controlling a git repository by making POST requests.
Check out: https://github.com/danlucraft/git.js
A pure JavaScript implementation of git.
This https://github.com/creationix/js-git is and will be the future!
It is backed by a kickstarter campaign and has a very sound software design.
Many of the client use cases, such as git clone, have been implemented.
According to the answer to my question on the issue tracker [1], the author also plans to implement parts of the server-side functionality, to let you build a server with it.
[1] https://github.com/creationix/js-git/issues/33
You can use https://github.com/isomorphic-git/isomorphic-git
isomorphic-git is a pure JavaScript reimplementation of git that works in both Node.js and browser JavaScript environments. It can read and write to git repositories, and fetch from and push to git remotes (such as GitHub), all without any native C++ modules.
isomorphic-git is a continuation of https://github.com/creationix/js-git
isomorphic-git would not have been possible without the pioneering work by
@creationix and @chrisdickinson. Git is a tricky binary mess, and without
their examples (and their modules!) I would not have been able to come even
close to finishing this. They are geniuses ahead of their time.
Related questions:
Run git push from javascript hosted in static site
How to implement Git in pure Javascript to create a GUI?
I guess it depends on what you need, but there's a few related projects out there.
The most "robust" implementation I can think of is this one by the 280 North crew (of Cappuccino fame).
There's also some server-side JavaScript projects underway (e.g., http://github.com/ajaxorg/node-github), but that won't run entirely within a browser client.
Update (for anyone else who comes across this):
Please be sure to check out vanthome's answer. Tim Caswell's js-git project is well funded and undoubtedly the best answer here at this time.
I just recently wrote a Git client called Nougit. Maybe this resembles something you are looking for?
$ npm install nougit
https://github.com/gordonwritescode/nougit
This is a full GUI, but the module inside, named "git.js", is an API I wrote specifically to do what you are describing. Yank out the file, and you can use Express to handle the HTTP routes.