I'm working on a SPA built with DurandalJS, which is hosted on app.example.com. The API is hosted on api.example.com. We're now planning to add a backend administration area for ourselves, to oversee our clients. We'll each have an account and we'll be able to manage our clients' stuff.
What we're trying to figure out is where to host the backend.
If we keep it on the app subdomain, we'll only have to add a new role (admin) to the existing application, but this would allow regular users to log in to the backend if our credentials are somehow leaked.
If we clone the existing application to admin.example.com, we'll always have to worry about keeping the code in sync, but it will be safer, because the admin subdomain will be closed to the public and the admin login will require a different set of API and private keys.
How should we handle this? If we go with #2, are there better ways to share code between two apps without the extra headaches?
I personally like the second approach of going with different subdomains.
Duplicating the codebase is not really necessary, since you can leverage the cool features RequireJS provides to map aliases to your modules. The important point here is that by extracting the business logic into modules, you can serve different implementations.
I've created a small GitHub Repo called durandal-multisite to explain in detail how you would proceed.
The general idea is to:
Keep the same viewmodels/views
Extract business logic (this should be done anyway to follow proper SoC)
Create 2 main.js implementations (frontend/backend)
Each main.js sets up a RequireJS map from the aliases requested by your viewmodels
to the concrete module implementation (see the sketch below)
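A minimal sketch of that mapping, assuming a hypothetical `dataservice` alias required by the viewmodels and two concrete implementations:

```js
// main-frontend.js -- entry point served on app.example.com
requirejs.config({
  map: {
    // every module asking for 'dataservice' gets the frontend implementation
    '*': { 'dataservice': 'services/dataservice-frontend' }
  }
});

// main-backend.js -- entry point served on admin.example.com
requirejs.config({
  map: {
    '*': { 'dataservice': 'services/dataservice-backend' }
  }
});

// A viewmodel just requires the alias and never knows which implementation it got.
define(['dataservice'], function (dataservice) {
  return {
    activate: function () { return dataservice.load(); }
  };
});
```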
I think you need to distinguish between creating subdomains that point to the same application and creating two separate applications.
I think in order to give a full answer, we need to identify some important aspects of your app first:
How are your authentication and authorization implemented? Are they part of the SPA? Do they happen before loading the SPA?
Will the code of the user application be 100% the same as the admin application? What do you mean by keeping them in sync?
Making some assumptions, I can give you an answer; it might not be accurate, but it could help:
Subdomains are cool, you get some information upfront (which subdomain the user is trying to access) so you can qualify requests easily and determine some stuff before actually hitting the application server. However, I don't think your problem here is in which subdomain the application should live.
The first thing you need to answer is how you distinguish an admin from a regular user. Obviously you should not rely on a subdomain to do so. This logic would probably live in the login process, based on some data (probably from a DB).
The next thing that you need to know is how your application changes depending on the role:
If your application will be 100% the same (same code) and will react dynamically based on the role that is logged in, you don't really need anything special. All you need is to make sure that your application is secure enough to not allow regular users to do admin stuff.
You could use the subdomain and some extra logic so only admins can use the subdomain. However, this is only "sugar" security that creates some separation. The app still needs to manage roles and permissions consistently (a rough sketch follows below).
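For instance, a minimal sketch of a client-side role check with Durandal's router, assuming a hypothetical `auth` module that knows the logged-in user's role (the API must still enforce the same rules):

```js
// shell.js -- guard admin routes on the client; this is convenience only,
// the API must still reject admin calls from non-admin users.
define(['plugins/router', 'auth'], function (router, auth) {
  router.guardRoute = function (instance, instruction) {
    // assumes admin routes are registered under an 'admin' prefix
    var isAdminRoute = String(instruction.config.route).indexOf('admin') === 0;
    if (isAdminRoute && auth.currentRole() !== 'admin') {
      return 'login'; // redirect non-admins away from admin routes
    }
    return true;
  };

  return {
    router: router,
    activate: function () { return router.activate(); }
  };
});
```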
If your application is not using the same codebase, you need to determine during the login process which role is logging in and which SPA should be sent to the browser. To do so, you need a separate login page, or a modular SPA that can load modules dynamically.
Probably you would like to reuse some code between applications (admin & user facing apps). You will have some challenges reusing parts of the codebase.
You don't need to worry that much about permissions and roles in the user app, but you do need a secure login process.
(Just a reminder) In any event, SPAs should contain logic to manage roles and permissions for the sake of consistency and to avoid user confusion. The main security enforcement is in your API. The goal for any SPA that has authentication and authorization is to have a secure API behind it.
Here's my current app structure:
/_css/
/_img/
/_js/
/app/
/lib/
index.html
After reading both answers, I ended up separating the apps while keeping them under the same roof.
/_css/
/_img/
/_js/
/app/
/backend/
/lib/
index-app.html
index-backend.html
This makes it easier to manage and switch between the apps. I don't have to worry about keeping the CSS, images, and libraries in sync, or about creating a new git repo for the backend.
I've also updated my Gruntfile.js to include a separate build process for the app and the backend (see the sketch below). Locally the apps live under the same /_js folder, but on the server they'll be on separate domains.
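My actual Gruntfile is more involved, but a minimal sketch of the two-target setup, assuming the grunt-contrib-requirejs optimizer and hypothetical main-app.js / main-backend.js entry points, looks like this:

```js
// Gruntfile.js -- two optimizer targets, one per app
module.exports = function (grunt) {
  grunt.initConfig({
    requirejs: {
      app: {
        options: {
          baseUrl: '_js/app',
          mainConfigFile: '_js/app/main-app.js',
          name: 'main-app',
          out: 'dist/app/main.js'
        }
      },
      backend: {
        options: {
          baseUrl: '_js/backend',
          mainConfigFile: '_js/backend/main-backend.js',
          name: 'main-backend',
          out: 'dist/backend/main.js'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-requirejs');
  grunt.registerTask('build-app', ['requirejs:app']);
  grunt.registerTask('build-backend', ['requirejs:backend']);
};
```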
I should have been more specific with my question and included the fact that my problem was more about how to manage both apps locally, rather than on the server.
Thanks for the answers!
I'm creating a statistics web page which displays sensitive information.
The page has a sort of table holding a massive amount of data, editable and stored in the server's database. Both the table and its code need to be hidden until the user has passed proper authentication (like logging in). Most of the questions on Stack Overflow say this is basically impossible, but when I look at lots of well-known websites, they seem to hide such things well, so I guess there are solutions to the problem.
At first, I built a full stack with a React - Express - Node - MariaDB toolchain.
The React client is responsible for rendering the page content and editable tables, and for sending requests that submit edited content.
Node with Express is responsible for retrieving data from the DB and updating it (it only provides data for the client side to manipulate -- that's all).
The problem comes when I consider the security of the client-side code. I want to hide all content of the page (not just the data from the server, but also its logic and features).
To achieve my goal, I've considered several things, but I'm not sure whether they are right or would work well.
Using server-side rendering -- I can't use this due to performance reasons and a lack of available resources.
Server-side rendering can hide logic from the user because it emits only HTML from the server; all actions are submitted to the server, which handles them and returns the result.
So I can provide only the login page at first, and if the login is successful, send the rest of the HTML and its logic from the server.
The problem is that the page's content is massive and the user interacts with it very often, and since I'm applying virtualization to my table (for performance reasons), its data and rendering logic have to be handled by the web browser.
Combining SSR and Client-Side Rendering
I'm not sure about this one; I doubt whether it is possible.
Use SSR to hide the site's content from unauthorized users; once authorized, the web browser renders the full content on demand. (Code and logic should be hidden before authorization; an unauthorized user can only see the login page.)
Is it possible to do it?
Get code on demand.
This is also just my guess, but it is what I am looking for. However, I strongly doubt whether it is possible.
The workflow would be like this:
If a user is not logged in: the user can only see the login page and its code.
If the user is logged in: the user can see features of the page like management, statistics, etc.
If the user accesses a specific feature: the rendering logic and HTTP request interface are downloaded from the server (or some less performance-hindering logic), and they render what the user wants to see and do (a rough sketch follows below).
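For example, a rough sketch of this idea with React's dynamic import, where the statistics code is only fetched after login (component and file names are hypothetical):

```js
import React, { Suspense, lazy, useState } from 'react';

// Statistics.js holds the sensitive table; its code is only downloaded
// after a successful login (the path is a placeholder).
const Statistics = lazy(() => import('./Statistics'));

function LoginPage({ onLogin }) {
  // Stand-in login form: a real one would POST credentials and store a token.
  return <button onClick={() => onLogin({ name: 'admin' })}>Log in</button>;
}

export default function App() {
  const [user, setUser] = useState(null);
  if (!user) {
    // Unauthenticated users only ever receive the login bundle.
    return <LoginPage onLogin={setUser} />;
  }
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Statistics user={user} />
    </Suspense>
  );
}
```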
It's okay if the answer doesn't follow the ideas above. Can you provide an outline for implementing this kind of web page? I'm quite new to web programming, so I can't find the proper approach. I want to know how I can achieve this, and with what kinds of solutions, libraries, and structure.
What library or package should I use for this?
How can I implement this?
Or can you describe how modern websites achieve this? (I think the SAP system quite resembles what I want to achieve.)
Foreword
Security is a complex topic in which it is not possible to reach zero threat. I'll try to craft an answer that fulfills what you are looking for.
Back end: Token, credentials, authentication
So, you are currently using Express for your back end, hence the need to protect access to that part. Many solutions exist; I favor token authentication, but you can do something with username/password (or this) to let users access the back end.
From what you are describing, you would use some sort of API (REST, GraphQL, etc.) to connect to the back end and make your queries (fetch, cross-fetch, apollo-link, etc.), adding the token to the call, usually in the headers.
If a user doesn't have the proper token, they get no data. Many sites use that method to block users from consuming data (e.g. Twitter, Instagram). This should cover the security of the data for your back end, and no code is exposed.
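As a rough illustration, a minimal Express middleware sketch using the jsonwebtoken package (the route, secret handling, and payload are hypothetical, not your actual API):

```js
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
const SECRET = process.env.JWT_SECRET; // hypothetical: keep this out of source control

// Reject any API call that doesn't carry a valid token in the Authorization header.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.replace(/^Bearer /, '');
  try {
    req.user = jwt.verify(token, SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'unauthorized' });
  }
}

// Unauthenticated users get nothing back from data routes.
app.get('/api/statistics', requireAuth, (req, res) => {
  res.json({ rows: [] }); // fetch the real rows from MariaDB here
});

app.listen(3000);
```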
Front-end: WebPack and application code splitting
Now the tricky part: you want the client side not to have access to all of the front end at once, but rather in several parts. This has 2 caveats:
It will be a bit slower than in normal use
Once the client has logged in once, they will have access to the application
The only workaround I see in this situation is to use only server-side rendering, if you want to limit the amount of front-end data the client has to the bare minimum. Granted, it is slow, but if you want maximum protection, that is the only solution.
Yet, if you still want to keep some interactions and have a faster front end while keeping a bit of security, you could use some code splitting with WebPack. I am not familiar with C so I can't say, but WebPack's multiple-page application setup, as I was mentioning in the comment, should give you a good start to build something more secure.
First, you would have, for example, 2 HTML files for entering the front end: one with the login and one with the application. The login page contains only the JavaScript modules needed for entering the application and shouldn't load the other JavaScript modules.
All in all, entry points are the way you enter the application. This is a very broad topic that I can't cover in this answer, but I recommend following WebPack's tutorial to find out how to work this out.
I recommend the part on code splitting, but the whole tutorial is worth a look.
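As a rough sketch of that two-entry-point idea (file names are hypothetical):

```js
// webpack.config.js -- one bundle for the login page, one for the application
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: {
    login: './src/login.js', // only the code needed to authenticate
    app: './src/app.js'      // the full application, served after login
  },
  output: {
    filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  plugins: [
    // login.html only references the login chunk, so none of the app code
    // is reachable from the public page.
    new HtmlWebpackPlugin({ filename: 'login.html', chunks: ['login'] }),
    new HtmlWebpackPlugin({ filename: 'app.html', chunks: ['app'] })
  ]
};
```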
Second, you will have to tweak the optimisation settings. They usually try to reduce the size of the application by merging code that is used by different parts or that is redundant: you don't want that here.
In your case, you don't want unauthenticated users to have access, so you would probably have to change things there (again, too broad a topic to cover in a single answer, since you would have to decide what you keep for optimisation and what you remove for security). Here is the link to the optimisation documentation, and a heads-up: you will have to configure the SplitChunksPlugin not to do this optimisation.
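A hypothetical tweak along those lines, continuing the config above:

```js
// Keep chunks from being shared across the login and app entry points,
// so authenticated-only code never leaks into the login bundle.
module.exports = {
  // ...entry, output and plugins as in the previous sketch...
  optimization: {
    splitChunks: {
      chunks: 'async' // only split lazily-loaded chunks, not the entry bundles
    }
  }
};
```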
I hope this helps. There are many solutions at hand and this is not a comprehensive guide, but it should give you enough material to get to what you need.
So, I have been learning Vue.js as my first JS framework for some time, and after I made some simple SPAs without much interaction with a server, I started to wonder: what should a backend be like with Vue? For educational purposes I gave it a try and came up with a pattern on my own, and now I can't imagine anything else; maybe I got the wrong idea.
What I came up with: I made a simple API with PHP which receives requests from the frontend (Vue component methods reacting to UI events) and fetches data from the model or updates data through it. A rough sketch of the pattern is below.
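Roughly, the pattern looks like this (the endpoint path and data shape are made up for illustration):

```js
// A Vue component whose methods call the PHP API in response to UI events.
new Vue({
  el: '#app',
  data: { items: [] },
  methods: {
    // e.g. bound to a button with @click="loadItems"
    loadItems() {
      fetch('/api/items.php')             // hypothetical PHP endpoint returning JSON
        .then(res => res.json())
        .then(data => { this.items = data; });
    },
    // e.g. called when the user saves an edit
    saveItem(item) {
      fetch('/api/items.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(item)
      });
    }
  }
});
```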
There are a lot of different backend solutions, and you should take whichever best fits your website's purpose and your personal preference.
If backend includes hosting in your case, then you basically have the two big options:
a) A server where you run it, e.g. behind a reverse proxy (example: DigitalOcean)
b) A cloud computing platform (example: AWS, Heroku, App Engine)
But you only need to host it that way if you actually run the app and retrieve dynamic updates on the page, e.g. new routes get added when you publish a new post.
If that is not the case, then a static hosting provider would be enough; there are thousands of them and they are pretty uncomplicated.
If you mean which database to use, then it also comes down to preference: do you want a SQL database or a NoSQL database like MongoDB? As a personal recommendation, I would suggest using Firebase as the backend for your experimental app. The free plan is more than enough for testing purposes, you have a smooth and easy-to-integrate authentication system available, and you can also take quick advantage of things like push messages, cloud storage buckets, and more.
Note that I'm not affiliated with Firebase by any means and this is just a personal recommendation. I feel like your question is pretty opinion-based, so maybe be more specific about your goals, or just comment below if you have any more questions.
I'm starting a project that basically is a single-page app that downloads and shows a bunch of stats (using d3.js). The data layer is Mongo-powered, served through a RESTful API, and the client app will be coded in Ember.js. We want all data to be exchanged through the API, since we also have some mobile apps on the back burner that will hook into the same API.
I'm debating whether to write the API (using Express.js or another server-side MVC framework), or to just serve the API using Deployd and not use a server-side framework at all, besides Deployd. Here are some hints about the project's characteristics:
The main feature is basically a dashboard that shows aggregated stats that are already computed and stored in the Mongo database.
User interaction is minimal, enough only to allow users to customize their dashboards, but users never upload data (other than customization preferences).
Most of the app is a lot of d3.js to create and render a bunch of graphs, which can be customized in many ways.
It requires a very rich and responsive user interface.
I proposed completely skipping the server-side framework and simply going with a bunch of static HTML+CSS, doing all the heavy lifting with a client-side MVC such as Ember.js. Since all data download and upload can be handled by Deployd, a pure static site would load much faster and is also easier to scale. Also, (I think) all user-related data and validation can be handled by Deployd itself.
The thing is, some of my colleagues nearly had a heart attack when I mentioned this idea. So I'd like a reality check: do I really need a server-side framework besides Deployd to cope with problems I cannot foresee yet? Are the benefits of a pure static site a good enough tradeoff versus having, say, Express.js just in case?
I haven't worked with Deployd before, but from a quick skim of its docs, it is a server-side framework. It accepts requests and responds with JSON. It's just oriented toward APIs and JSON and neglects HTML, unlike, say, default Ruby on Rails.
The main issues I can think of that might arise due to a lack of a traditional server-side framework are things like auth, CORS, and XSS/CSRF/other common security issues. You could cater for this through Deployd if it's built in or easily added, but that may be difficult.
Looking further into Deployd's docs, I see there's a guide for users and CORS. I can't find anything about XSS or CSRF.
I'm currently building a small Node.js Website and I want to host it on Azure. I wanted to ask whether I should separate the Front- and Backend parts of the app into two different Azure Websites, or just put the whole app into one Azure Website slot.
The reason I think separating into two parts might make sense is that I want to access the backend part not only from the corresponding website but also from other clients (e.g. a mobile app). And I think that in a single-site setup, updating the frontend would also mean downtime for the backend API.
On the other hand, two websites make it harder to query the backend from the frontend (hardcoded URLs, socket.io can't easily be set up anymore, etc.).
What would you recommend in my case?
Thanks a lot.
ExpressJS
Socket.io
some Database
jQuery on the client / cheerio on the server side (DOM parsing)
I don't have any experience with Azure. Switch to Linux!
Having access to your apps and data should be very important and one of the things the app is built around.
Azure Websites supports creating staged deployments for your site: http://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/
This means that you don't need to worry about downtime for your backend when you're updating your front end: you can update the front end in the staging environment, and when it's ready to be deployed, you just swap the staging and production environments, and all requests are immediately redirected to the updated version of your site with no downtime.
Zain is correct in that you can create slots in your Azure website and swap them to prevent downtime. For example, you can create a staging slot and a production slot and push your updates to staging. Once satisfied that staging is working correctly, you can swap staging with live, and in theory there should be no downtime at all for the live slot.
That said, it depends on your budget. An Azure app with slots costs more than one without. If you want cheap and don't care about things like custom domains, use two free websites and that will solve your problem. Otherwise, and to keep everything together, I would probably just combine them and pay the higher price, but it depends on how your app works too.
If price isn't an issue, I would ask myself: "How difficult is it going to be to query the back end from the front end?" If it's going to be a pain and make what would be a simple app overly complex, you might want to combine the apps and use deployment slots (see here for more info: https://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/)
I have designed a meteor.js application and it works great on localhost and even when deployed to the internet. Now I want to create a sign-up site that will spin up a new instance of the application for each client who signs up on the back-end. Assuming a meteor.js application and Python or JavaScript for the sign-up site, what high-level steps need to be taken to implement this?
I am looking for a more correct and complete answer that takes the form of my poorly imagined version of this:
Use something like node or python to call a shell script that may or may not run as sudo
That script might create a new folder to hold instance-specific stuff (like client files, config, and/or that instance's database).
The script or python code would deploy an instance of the application to that folder and on a specific port
Python might add configuration information to a tool like Pound to forward a subdomain to a port
Other things....!?
I don't really understand the high level steps that need to be taken here so if someone could provide those steps and maybe even some useful tools or tutorials for doing so I'd be extremely grateful.
I have a similar situation to you but ended up solving it in a completely different way. It is now available as a Meteor smart package:
https://github.com/mizzao/meteor-partitioner
The problem we share is that we wanted to write a Meteor app as if only one client (or group of clients, in my case) exists, while it actually needs to handle multiple sets of clients without them knowing about each other. I am doing the following:
Assume the Meteor app is programmed for just a single instance
Using a smart package, hook the collections on the server (and possibly the client) so that all operations are 'scoped' to the instance of the user calling them. One way to do this is to automatically attach an 'instance' or 'group' field to each document being added (a rough sketch follows below).
Doing this correctly requires a lot of knowledge about the internals of Meteor, which I've been learning. However, this approach is a lot cleaner and less resource-intensive than trying to deploy multiple meteor apps at once. It means that you can still code the app as if only one client exists, instead of explicitly doing so for multiple clients. Additionally, it allows you to share resources between the instances that can be shared (i.e. static assets, shared state, etc.)
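As a rough sketch of that hooking idea, using the matb33:collection-hooks package (the collection name, the group field, and where the group comes from are hypothetical):

```js
// server/items.js -- scope a collection to the calling user's group
Items = new Meteor.Collection('items');

// Stamp every inserted document with the group of the user inserting it.
Items.before.insert(function (userId, doc) {
  var user = Meteor.users.findOne(userId);
  doc.group = user && user.group;
});

// Restrict every query to the calling user's group, so instances never
// see each other's documents.
Items.before.find(function (userId, selector, options) {
  if (userId) {
    var user = Meteor.users.findOne(userId);
    selector.group = user && user.group;
  }
});
```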
For more details and discussions, see:
https://groups.google.com/forum/#!topic/meteor-talk/8u2LVk8si_s
https://github.com/matb33/meteor-collection-hooks (the collection-hooks package; read issues for additional discussions)
Let me remark first that I think spinning up multiple instances of the same app is a bad design choice. If it is a stopgap measure, here's what I would suggest:
Create an archive that can be readily deployed. (Bundle the app, reinstall fibers if necessary, rezip). Deploy (unzip) the archive to a new folder when a new instance is created using a script.
Create a template of an init script and use forever or daemonize or jesus etc. to start the site on reboot and keep the sites up during normal operation. See Meteor deploying to a VM by installing meteor or How does one start a node.js server as a daemon process? for examples. When a new instance is deployed, populate the template with new values (i.e. port number, database name, folder). Copy the filled-out template to init.d and link it to the runlevel. Alternatively, create one script in init.d that executes other scripts to bring up the site.
Each instance should be listening on its own port, so you'll need a reverse proxy. AFAIK, Apache and Nginx require restarts when you change the configuration, so you'll probably want to look at Hipache https://github.com/dotcloud/hipache. Hipache uses Redis to store the configuration information, so adding a new instance only requires adding a key to Redis (a rough sketch follows below). There is an experimental port of Hipache that brings the functionality to Nginx https://github.com/samalba/hipache-nginx
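For illustration, a hypothetical Node snippet that registers a freshly deployed instance with Hipache by pushing its route into Redis (the hostname and port are made up):

```js
// register-instance.js -- add a Hipache route for a new client instance
var redis = require('redis');
var client = redis.createClient();

function registerInstance(subdomain, port, done) {
  var key = 'frontend:' + subdomain + '.example.com';
  // Hipache expects the first list element to be an identifier,
  // followed by one or more backend URLs.
  client.rpush(key, subdomain, 'http://127.0.0.1:' + port, done);
}

registerInstance('client42', 3042, function (err) {
  if (err) throw err;
  console.log('instance registered with Hipache');
  client.quit();
});
```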
What about DNS updates? Once you create a new instance, do you need to add a new record to your DNS configuration?
I don't really have an answer to your question... but I just want to remind you of another potential problem that you may run into, since I see you mentioned Python; in other words, you may be running another web app on Apache/Nginx, etc. The problem is that Meteor is not very friendly when it comes to coexisting with another HTTP server. The project I'm working on was troubled by this issue and we had to move it to a standalone server after days of hassle with the guys from Meteor. I did not work on the issue myself, so I can't give you more details, but I just looked online and found something similar: https://serverfault.com/questions/424693/serving-meteor-on-main-domain-and-apache-on-subdomain-independently.
Just something to keep in mind...