Currently I have two Node processes that I need to run. One is my own custom app, and the other is iframely, another Node app that returns embed codes. Right now, I have my app make requests to, say, http://localhost:8061/iframely?url=.... But now, switching to Heroku, only one process in my app can accept HTTP requests (the process designated with web: in the Procfile, as I understand it).
To run iframely alongside my app, do I have to create another app? Or can I have the two processes speak to each other, bypassing HTTP? Keep in mind that I don't want to heavily restructure iframely.
It sounds from your description as if you have two separate node apps, and each one serves its own purpose.
Regardless of how these apps are implemented, the best way to handle this sort of thing is with multiple Heroku apps. This is what they were designed for!
You can think of a Heroku app as a single-purpose web server. If you have one codebase that does something independent of another, create two Heroku apps. If you have 3 codebases that all do different things, make 3 Heroku apps.
In addition to being the best way to handle this sort of thing in general (you get more reliability, since each service has its own servers), it's also cheaper: you get one free Heroku dyno per app, which means you'll have twice the free web servers you would have otherwise.
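If it helps, here's roughly what the calling side looks like once iframely runs as its own Heroku app: a minimal sketch, assuming you point an environment variable (the name IFRAMELY_URL is made up here) at the second app's public URL.

```javascript
// Sketch: call the iframely app over its public Heroku URL instead
// of localhost. IFRAMELY_URL is a placeholder env var name, e.g.
// https://your-iframely-app.herokuapp.com
const https = require('https');

const base = process.env.IFRAMELY_URL;

function getEmbed(pageUrl, cb) {
  https.get(`${base}/iframely?url=${encodeURIComponent(pageUrl)}`, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => cb(null, JSON.parse(body)));
  }).on('error', cb);
}

// Same call shape as before, just a different host.
getEmbed('https://example.com/some-page', (err, embed) => {
  if (err) return console.error(err);
  console.log(embed);
});
```

Each app keeps its own Procfile with a single web: entry, so both can accept HTTP requests.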
Related
I'm relatively new to node.js and I'm trying to make a game that uses only one accessible URL but has multiple pages. In my game I'm going to be using the socket.io and express modules. I really want to have one URL the client will access, which will give them the title screen and then the menu page. There will be buttons on this page that send the client to different servers that run games. The only way I can think of to do this is to have multiple servers running different pages and have the buttons contain links. I know that it is possible to have multiple servers running behind the same URL; one example of this is the diep.io online game. Is it possible to accomplish this with node.js? If so, how would I go about making this happen? Should I be using ports to create these pages?
I believe what you're talking about is referred to as "horizontal scaling" where instead of having one beefy server you have many lower-end servers that run the same code.
Or, since you're using Socket.io, there's the concept of "rooms" that you can use.
My memory on both of these topics is a bit cloudy so I can't help much more, but you should be able to Google them now.
This channel has tutorials on JS / Node game dev:
https://www.youtube.com/user/RainingChain/featured
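For what it's worth, here's a minimal sketch of the rooms idea with Express and Socket.io: one server and one URL, with each game isolated in its own room. The event names like 'joinGame' are made up for illustration.

```javascript
// One server, one URL; each game lives in its own Socket.io room.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  // The client asks to join a specific game ("joinGame" is an
  // illustrative event name, not from the question).
  socket.on('joinGame', (gameId) => {
    socket.join(gameId);
    // Broadcast only to players in that game's room.
    io.to(gameId).emit('playerJoined', socket.id);
  });
});

server.listen(3000);
```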
I have searched everywhere for a solution to this problem, but for whatever reason I cannot find a clear answer as to how to carry out this task.
I have built a very simple server with node.js that accesses two numbers from a website API and outputs them on a localhost port on my computer.
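(The snippet itself isn't included here, but a minimal server of that shape might look something like the following; the API URL is a placeholder, not the one from the question.)

```javascript
// Hypothetical reconstruction of a tiny proxy-style server that
// fetches two numbers from an external API and serves them locally.
const http = require('http');

http.createServer((req, res) => {
  // Placeholder API endpoint.
  http.get('http://api.example.com/numbers', (apiRes) => {
    let body = '';
    apiRes.on('data', (chunk) => (body += chunk));
    apiRes.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(body);
    });
  }).on('error', () => {
    res.writeHead(502);
    res.end('Upstream error');
  });
}).listen(3000, () => console.log('Listening on http://localhost:3000'));
```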
My question is how can I take my server and make it accessible to applications without having to go into the command line and run the server file? Is there a way I can host it online instead of locally so that I can distribute the application and anyone with the application can pull from this server? What would be the best way to go about accomplishing this task?
Heroku is probably one of the easiest ways to get started with deploying the application: https://devcenter.heroku.com/articles/getting-started-with-nodejs#introduction
However, if you are not familiar with git, Microsoft Azure Web Apps is another great option: https://tryappservice.azure.com/
They both offer free plans which should get you up and running fast!
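As a rough sketch, the Heroku route boils down to a few CLI commands (this assumes the Heroku CLI is installed and your project is a git repo with a start script in package.json):

```
heroku login
heroku create          # creates the app and adds a git remote
git push heroku main   # deploys (or master on older setups); Heroku runs "npm start"
heroku open            # opens the deployed URL in a browser
```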
Question 1: how can I take my server and make it accessible to applications without having to go into the command line and run the server file?
To start a Node server you will always need to use the console. If you are not used to it, it's time to start :).
You will be using it not only for Node servers but for administering (almost) every server in the world.
Question 2: Is there a way I can host it online instead of locally?
There are a lot of Node.js hosting platforms; you can choose between PaaS solutions and IaaS solutions (AWS EC2, DigitalOcean, etc.). Probably the easiest way to start is with a PaaS; in this blog post you will find a good list of PaaS hosting providers. Some of them have free plans.
Take a look at Heroku. They have a free tier (with limitations) and a hobby tier for $7/month.
I have implemented that with Amazon AWS. They offer a free option for a limited time, which is still great because you have full control of the console. You should be careful while setting up the machine in the cloud; there are many guides online that you can follow for the steps. This one, for example, has step-by-step info: follow the steps and you will be able to run your app and access it from any public network.
I'm currently building a small Node.js Website and I want to host it on Azure. I wanted to ask whether I should separate the Front- and Backend parts of the app into two different Azure Websites, or just put the whole app into one Azure Website slot.
The reason I think separating into two parts might make sense is that I want to access the Backend not only from the corresponding website but also from other clients (e.g. a mobile app). And I think that with everything in one site, updating the Frontend would also mean downtime for the Backend API.
On the other hand, two websites make it harder to query the Backend from the Frontend (hardcoded URLs, socket.io can't be set up as easily anymore, etc.).
What would you recommend in my case?
Thanks a lot.
My stack:
ExpressJS
Socket.io
some database
jQuery on the client / cheerio on the server side (DOM parsing)
I don't have any experience with Azure. Switch to Linux! Having full access to your apps and data should be very important, and one of the things the app is built around.
Azure Websites supports creating staged deployments for your site: http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/
This means you don't need to worry about downtime for your backend when you're updating your front end: you can update the front end in the staging environment, and when it's ready to be deployed you just swap the staging and production environments. All requests are then immediately redirected to the updated version of your site, with no downtime.
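For reference, the swap itself is a single command. This sketch uses the current Azure CLI (the older `azure site swap` command did the same thing), and the resource group, app, and slot names are placeholders:

```
az webapp deployment slot swap \
  --resource-group my-group \
  --name my-site \
  --slot staging \
  --target-slot production
```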
Zain is correct in that you can create slots in your Azure website and swap them to prevent downtime. For example, you can create a staging slot and a production slot and push your updates to staging. Once satisfied that staging is working correctly, you can swap staging with live, and in theory there should be no downtime at all for the live slot.
That said, it depends on your budget. An Azure app with slots costs more than one without. If you want to keep costs down and don't care about things like custom domains, use two free websites and that will solve your problem. Otherwise, to keep everything together, I would probably just combine them and pay the higher price, but it depends on how your app works too.
If price isn't an issue, I would ask myself: "How difficult is it going to be to query the back end from the front end?" If it's going to be a pain and make what would be a simple app overly complex, you might want to combine the apps and use deployment slots (see here for more info: https://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/).
I'm working on a SPA built with DurandalJS, which is hosted on app.example.com. The API is hosted on api.example.com. We're now planning to add backend administration for ourselves, to oversee our clients. We'll each have an account, and we'll be able to manage our clients' stuff.
What we're trying to figure out is where to host the backend.
If we keep it on the app subdomain, we'll only have to add a new role (admin) to the existing application, but this will allow regular users to log in to the backend if our credentials are somehow leaked.
If we clone the existing application to admin.example.com, we'll always have to worry about keeping the code in sync, but it will be safer, because the admin subdomain will be closed to the public and the admin login will require a different set of API and private keys.
How should we handle this? If we go with #2, are there better ways to share code between two apps without the extra headaches?
I personally like the second approach: going with different subdomains.
Duplicating the codebase is not really necessary, since you can leverage the cool features RequireJS provides to map aliases to your modules. The important point is that by extracting the business logic into modules, you can serve different implementations.
I've created a small GitHub Repo called durandal-multisite to explain in detail how you would proceed.
The general idea is to:
Keep the same viewmodels/views.
Extract the business logic (this should be done anyway, to follow proper SoC).
Create two main.js implementations (frontend/backend).
Have each main.js set up a RequireJS map from the aliases requested by your viewmodels to the concrete module implementations (see the sketch below).
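A minimal sketch of that map idea; the module names here are illustrative, not taken from durandal-multisite:

```javascript
// main-frontend.js -- the frontend entry point
requirejs.config({
  map: {
    // Any module that requires "services/data" gets the
    // frontend-specific implementation.
    '*': {
      'services/data': 'services/data-frontend'
    }
  }
});

// main-backend.js would map the same alias to the admin module:
//   'services/data': 'services/data-backend'
```

The viewmodels only ever ask for 'services/data', so they stay identical across both builds.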
I think you need to distinguish between creating subdomains that point to the same application and creating two separate applications.
I think in order to give a full answer, we need to identify some important aspects of your app first:
How are your authentication and authorization implemented? Are they part of the SPA? Do they happen before loading the SPA?
Will the code of the user application be 100% the same as the admin application? What do you mean by keeping them in sync?
Making some assumptions, I can give you an answer; it might not be accurate, but it could help:
Subdomains are cool: you get some information upfront (which subdomain the user is trying to access), so you can qualify requests easily and determine some things before actually hitting the application server. However, I don't think your problem here is which subdomain the application should live on.
The first thing you need to answer is how you would qualify a user as an Admin or a regular User. Obviously you should not rely on a subdomain to do so; this logic would probably live in the login process, based on some data (probably from a DB).
The next thing that you need to know is how your application changes depending on the role:
If your application will be 100% the same (same code) and will react dynamically based on the role that logs in, you don't really need anything special. All you need is to make sure that your application is secure enough not to allow regular users to do admin stuff.
You could use the subdomain plus some extra logic so that only Admins can use it. However, this is only "sugar" security that adds some separation; the app still needs to manage roles and permissions consistently.
If your application is not using the same codebase, you need to determine during the login process which role is logging in and which SPA to send to the browser. To do so, you need a separate login page, or a modular SPA that can load modules dynamically.
You would probably like to reuse some code between the applications (the admin and user-facing apps); you will have some challenges reusing parts of the codebase.
You don't need to worry as much about permissions and roles in the user app, but you do need a secure login process.
(Just a reminder.) In any event, SPAs should contain logic to manage roles and permissions for the sake of consistency and to avoid user confusion, but the main security management is in your API. The goal in any SPA that has authentication and authorization is to have a secure API behind it.
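To make that last point concrete, here is a minimal sketch of role enforcement at the API level, written as Express middleware; req.user and the route are hypothetical, and the details depend on your auth layer:

```javascript
const express = require('express');
const app = express();

// Assumes some earlier auth middleware has populated req.user.
function requireAdmin(req, res, next) {
  if (req.user && req.user.role === 'admin') return next();
  res.status(403).json({ error: 'Admins only' });
}

// Hypothetical admin-only endpoint.
app.get('/admin/clients', requireAdmin, (req, res) => {
  res.json({ clients: [] }); // placeholder payload
});

app.listen(3000);
```

Whichever subdomain serves the SPA, a leaked regular-user credential still can't reach the admin endpoints.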
Here's my current app structure:
/_css/
/_img/
/_js/
/app/
/lib/
index.html
After reading both answers, I ended up separating the apps while keeping them under the same roof.
/_css/
/_img/
/_js/
/app/
/backend/
/lib/
index-app.html
index-backend.html
This gives me an easier way to manage and switch between the apps. I don't have to worry about keeping the CSS, the images, and the libraries in sync, or about creating a new git repo for the backend.
I've also updated my Gruntfile.js to include separate build processes for the app and the backend, roughly as sketched below. So locally the apps live under the same /_js folder, but on the server they'll be on separate domains.
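A hypothetical Gruntfile.js excerpt (not the actual config) showing the shape of it, with one uglify target per app so each can be built on its own:

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      // Each target bundles one app's sources into its own file.
      app:     { files: { '_js/app.min.js':     ['app/**/*.js'] } },
      backend: { files: { '_js/backend.min.js': ['backend/**/*.js'] } }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('build-app',     ['uglify:app']);
  grunt.registerTask('build-backend', ['uglify:backend']);
};
```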
I should have been more specific with my question and included the fact that my problem was more about how to manage both apps locally rather than on the server.
Thanks for the answers!
I have designed a meteor.js application and it works great on localhost, and even when deployed to the internet. Now I want to create a sign-up site that will spin up a new instance of the application on the back end for each client who signs up. Assuming a meteor.js application and Python or JavaScript for the sign-up site, what high-level steps need to be taken to implement this?
I am looking for a more correct and complete answer that takes the form of my poorly imagined version of this:
Use something like Node or Python to call a shell script that may or may not run as sudo.
That script might create a new folder to hold instance-specific stuff (like client files, config, and/or that instance's database).
The script or Python code would deploy an instance of the application to that folder, on a specific port.
Python might add configuration information to a tool like Pound to forward a subdomain to a port
Other things....!?
I don't really understand the high level steps that need to be taken here so if someone could provide those steps and maybe even some useful tools or tutorials for doing so I'd be extremely grateful.
I have a similar situation to you but ended up solving it in a completely different way. It is now available as a Meteor smart package:
https://github.com/mizzao/meteor-partitioner
The problem we share is that we want to write a Meteor app as if only one client (or group of clients, in my case) exists, but have it handle multiple sets of clients without them knowing about each other. I am doing the following:
Assume the Meteor app is programmed for just a single instance
Using a smart package, hook the collections on the server (and possibly the client) so that all operations are "scoped" to the instance of the user that is calling them. One way to do this is to automatically attach an "instance" or "group" field to each document that is added.
Doing this correctly requires a lot of knowledge about the internals of Meteor, which I've been learning. However, this approach is a lot cleaner and less resource-intensive than trying to deploy multiple Meteor apps at once. It means that you can still code the app as if only one client exists, instead of handling multiple clients explicitly. Additionally, it allows you to share resources between the instances where they can be shared (i.e. static assets, shared state, etc.).
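A tiny sketch of that hooking idea, using the matb33:collection-hooks package linked below; the collection and field names are illustrative:

```javascript
// Every document is stamped with the caller's group, and every
// query is silently restricted to that group.
Games = new Mongo.Collection('games');

Games.before.insert(function (userId, doc) {
  doc.groupId = Meteor.users.findOne(userId).groupId;
});

Games.before.find(function (userId, selector) {
  selector.groupId = Meteor.users.findOne(userId).groupId;
});
```

The rest of the app can then query Games as if only one group existed.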
For more details and discussions, see:
https://groups.google.com/forum/#!topic/meteor-talk/8u2LVk8si_s
https://github.com/matb33/meteor-collection-hooks (the collection-hooks package; read issues for additional discussions)
Let me remark first that I think spinning up multiple instances of the same app is a bad design choice. If it is a stopgap measure, here's what I would suggest:
Create an archive that can be readily deployed (bundle the app, reinstall fibers if necessary, re-zip). When a new instance is created, deploy (unzip) the archive to a new folder using a script.
Create a template of an init script, and use forever, daemonize, jesus, etc. to start the site on reboot and keep it up during normal operation. See Meteor deploying to a VM by installing meteor or How does one start a node.js server as a daemon process? for examples. When a new instance is deployed, populate the template with new values (i.e. port number, database name, folder), copy the filled-out template to init.d, and link it to the runlevel. Alternatively, create one script in init.d that executes other scripts to bring up the site.
Each instance should be listening on its own port, so you'll need a reverse proxy. AFAIK, Apache and Nginx require restarts when you change the configuration, so you'll probably want to look at Hipache: https://github.com/dotcloud/hipache. Hipache uses Redis to store its configuration, so adding a new instance just requires adding a key to Redis, as sketched below. There is an experimental port of Hipache that brings the same functionality to Nginx: https://github.com/samalba/hipache-nginx
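A sketch of that registration step, following the Redis schema from the Hipache README (the subdomain, identifier, and port here are placeholders), using the classic callback-style node_redis client:

```javascript
const redis = require('redis');
const client = redis.createClient();

// Hipache routes: the first list element is an identifier,
// the following elements are the backend URLs.
client.rpush('frontend:client42.example.com', 'client42');
client.rpush('frontend:client42.example.com', 'http://127.0.0.1:8042');
client.quit();
```

Hipache picks up the new route without a restart, which is exactly why it's suggested over Apache/Nginx here.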
What about DNS updates? Once you create a new instance, do you need to add a new record to your DNS configuration?
I don't really have an answer to your question, but I just want to remind you of another potential problem you may run into, since I see you mentioned Python; in other words, you may be running another web app on Apache/Nginx, etc. The problem is that Meteor is not very friendly when it comes to coexisting with another HTTP server. The project I'm working on was troubled by this issue, and we had to move it to a standalone server after days of hassle with the guys from Meteor. I did not work on the issue myself, so I can't give you more details, but I looked online and found something similar: https://serverfault.com/questions/424693/serving-meteor-on-main-domain-and-apache-on-subdomain-independently.
Just something to keep in mind...