Is it possible to create a single Meteor application having multiple domains and displaying different views/layouts depending on that domain?
For example, I have an admin interface accessible on admin.myapp.com and two domains, storeX.com and storeY.com. Both domains should point to the data from admin.myapp.com but display the data (mostly) independently of each other.
This may not be totally updated to 2014 standards, but I did answer this question before:
How can Meteor handle multiple Virtual Hosts?
And with the same setup, you can use Passenger (for nginx or Apache2) with Meteor. Here's a complete tutorial for using Passenger with Meteor, but keep in mind you have to integrate the multiple virtual hosts/domains into this tutorial yourself.
Perhaps a better approach would be to use Meteor's Pub/Sub capabilities, rather than sharing a DB per se. It's entirely possible to publish and subscribe across Meteor apps, or indeed any implementation using DDP.
http://docs.meteor.com/#/full/ddp_connect
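As a rough illustration of that idea (the app URLs and collection name are assumptions, not from the question): the admin app publishes a collection, and each store app connects to it over DDP and subscribes.

```js
// In the admin app (admin.myapp.com) – publish the shared collection.
Products = new Mongo.Collection('products');
if (Meteor.isServer) {
  Meteor.publish('products', function () {
    return Products.find();
  });
}

// In a store app (storeX.com) – connect to the admin app over DDP and subscribe.
if (Meteor.isClient) {
  var remote = DDP.connect('https://admin.myapp.com');
  var Products = new Mongo.Collection('products', { connection: remote });
  remote.subscribe('products');
  // Products.find() now reflects the admin app's data; each store can render it
  // with its own views/layouts.
}
```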
You can use the partitioner to send different views of the data to different users based on the domain name they hit.
See How can I share MongoDB collections between Meteor apps?. Basically the idea is that you build two Meteor apps which share the MongoDB and collection(s) data.
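A minimal sketch of that setup, assuming both apps are pointed at the same (hypothetical) MongoDB URL:

```js
// Start both apps with the same MONGO_URL so they use one database, e.g.:
//   MONGO_URL=mongodb://localhost:27017/shared_db meteor --port 3000   (admin app)
//   MONGO_URL=mongodb://localhost:27017/shared_db meteor --port 3100   (store app)
// Then declare the collection with the same name in each app, so both read and
// write the same underlying MongoDB collection:
Products = new Mongo.Collection('products');
```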
I am building a website that will host multiple apps. Some apps can only be accessed by certain groups from an LDAP backend. To achieve this on a pure Django stack, I used
user_has_perm
@user_passes_test()
so that certain views can only be accessed by users who have permission to add items (I give the permission to add items to certain groups; users are automatically placed into a group from the LDAP).
However, I am now migrating to a React + Django stack. How can I achieve a similar effect, where you can only access some apps if you are in the correct LDAP group?
Note - I have a single React app frontend that uses routers to get to the different components (apps).
If you have a hybrid architecture, the easiest solution is to do the same thing but with the view that serves your React app:
user_has_perm
@user_passes_test()
view_that_renders_template_with_react
If you have many different pages with React Router, you can add an additional check that the user is permitted to reach a given page: attach this information to the user or fetch it with a request, and if the check fails, do something such as redirecting or throwing an error like a 403. You can also use the native django-rest-framework permissions and check them during requests, and do something if the user has no permission (a rough sketch of the client-side check follows after this answer). If you are not using React Router but still have a hybrid architecture (you connect React via Django templates/static), you can make many near-identical views, each checking a different condition, and then define URLs for them in Django's urls.
I'm not able to provide a clearer explanation or attach code, because there are so many ways to build such an app and each requires an individual solution. I've just outlined the possible approaches; which to use, and how, depends on your implementation and the architecture you've chosen.
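That said, here is a hedged sketch of the React Router-side check described above. The endpoint /api/has-perm/ and the permission name are assumptions; on the Django side it would be backed by a DRF view that calls request.user.has_perm() against the LDAP-populated groups.

```js
// Ask the backend whether the current user holds a permission before showing a page.
// The real enforcement still happens server-side; this only hides the UI.
async function canAccess(permission) {
  const res = await fetch('/api/has-perm/?perm=' + encodeURIComponent(permission), {
    credentials: 'include', // send the Django session cookie
  });
  if (res.status === 403) return false;
  const body = await res.json();
  return body.allowed === true;
}

// Example usage in a route guard or component effect:
// if (!(await canAccess('app.add_item'))) { window.location.assign('/forbidden'); }
```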
So, I have been learning Vue.js as my first JS framework for some time, and after I made some simple SPAs without much interaction with a server I started to wonder: what should a backend be like with Vue? For educational purposes I gave it a try and came up with a pattern on my own, and now I can't imagine anything else; maybe I got some wrong idea.
What I came up with: I made a simple API with PHP which received requests from the frontend (Vue component methods reacting to UI events) and requested data from the model or updated data through it.
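For concreteness, a minimal sketch of that pattern; the /api/items.php endpoint is a made-up example, not from the question.

```js
// A Vue component whose methods react to UI events and talk to a PHP API.
new Vue({
  el: '#app',
  data: { items: [] },
  methods: {
    async loadItems() {
      const res = await fetch('/api/items.php');      // hypothetical PHP endpoint
      this.items = await res.json();
    },
    async addItem(item) {
      await fetch('/api/items.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(item),
      });
      await this.loadItems();                          // re-read through the model
    },
  },
});
```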
There are a lot of different backend solutions, and you should take whatever best fits your website's purpose and your personal preference.
If backend includes hosting in your case, then you basically have the two big options:
a) A server where you run it, e.g. via a reverse proxy (example: Digital Ocean)
b) A cloud computing platform (example: AWS, Heroku, App Engine)
But you only need to host it that way if you actually run the app and retrieve dynamic updates on the page, for example when new routes get added as you publish a new post.
If that is not the case, then a static hosting provider would be enough; there are thousands of them and they are pretty uncomplicated.
If you mean which database to use, then it also comes down to preference: do you want a SQL database or a NoSQL database like MongoDB? As a personal recommendation I would suggest using Firebase as the backend for your experimental app; the free plan is more than enough for testing purposes, you get a smooth and easy-to-integrate authentication system, and you can also take quick advantage of things like push messages, cloud storage buckets and more.
Note that I'm not affiliated with Firebase in any way and this is just a personal recommendation. I feel like your question is pretty opinion-based, so maybe be more specific about your goals, or just comment down below if you have any more questions.
I have a bit of conceptual question regarding the structure of users and their documents.
Is it a good practice to give each user within CouchDB their own database which holds their documents?
I have read that CouchDB can handle thousands of databases and that it is not uncommon for each user to have their own database.
Reason:
The reason for asking this question is that I am trying to create a system where a logged-in user can only view their own documents and can't view any other user's documents.
Any suggestions?
Thank you in advance.
It’s a rather common scenario to create a CouchDB bucket (DB) for each user. Although there are some drawbacks:
You must keep ddocs in sync in each user bucket, so deployment of ddoc changes across multiple buckets may become a real adventure.
If docs are shared between users in some way, you get doc and view-index duplicates in each bucket.
You must block _info requests to avoid leaking the user list (or you must name buckets using hashes).
In any case, you need some proxy in front of Couch to create and prepare a new bucket on user registration.
You'd better protect Couch from running out of capacity when it receives too many requests – this also requires a proxy.
Per-doc read ACLs can be implemented using _list functions, but this approach has some drawbacks and it also requires a proxy, at least a web server, in front of CouchDB. See CouchDb read authentication using lists for more details; a rough sketch of the idea follows after this answer.
Also you can try to play with CoverCouch, which implements full per-doc read ACLs while keeping the original CouchDB API untouched, but it's in very early beta.
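To make the _list-function approach concrete, here is a hedged sketch; the design doc name, the by_owner view and the owner field are assumptions about the document shape, not part of the original answer.

```js
// Design document: a view keyed by document owner, plus a _list function that
// only returns rows belonging to the authenticated user.
var ddoc = {
  _id: '_design/acl',
  views: {
    by_owner: {
      map: function (doc) {
        if (doc.owner) emit(doc.owner, null);
      }.toString()
    }
  },
  lists: {
    mine: function (head, req) {
      var user = req.userCtx && req.userCtx.name;
      start({ headers: { 'Content-Type': 'application/json' } });
      var row, ids = [];
      while ((row = getRow())) {
        if (row.key === user) ids.push(row.id);   // drop rows owned by other users
      }
      send(JSON.stringify({ docs: ids }));
    }.toString()
  }
};
// Clients would call GET /db/_design/acl/_list/mine/by_owner (behind a proxy).
```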
This is quite a common use case, especially in mobile environments, where the data for each user is synchronized to the device using one of the Android, iOS or JavaScript (pouchdb) libraries.
So in concept, this is fine but I would still recommend testing thoroughly before going into production.
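For example, a hedged sketch of syncing a per-user database to the browser with PouchDB; the database name, server URL and credentials are assumptions.

```js
// Local copy on the device/browser, plus the user's own remote database.
var local = new PouchDB('userdb-alice');
var remote = new PouchDB('https://couch.example.com/userdb-alice', {
  auth: { username: 'alice', password: 'secret' }   // hypothetical credentials
});

// Continuous two-way replication keeps both copies in sync, so the app keeps
// working offline and catches up when connectivity returns.
local.sync(remote, { live: true, retry: true })
  .on('change', function (info) { console.log('synced', info); })
  .on('error', function (err) { console.error('sync error', err); });
```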
Note that one downside of multiple databases is that you can't write queries that span multiple databases. There are some workarounds though - for more information see Cloudant: Searching across databases.
Update 17 March 2017:
Please take a look at Cloudant Envoy for more information on this approach.
Database-per-user is a common pattern with CouchDB when there is a requirement for each application user to have their own set of documents which can be synced (e.g. to a mobile device or browser). On the surface, this is a good solution - Cloudant handles a large number of databases within a single installation very well. However ...
Source: https://github.com/cloudant-labs/envoy
The solution is as old as web applications - if you think of a MySQL database, there is nothing in the database to stop user B viewing records belonging to user A; it is all coded in the application layer.
In CouchDB there is likewise no completely secure way to prevent user B from accessing documents written by user A. You would need to code this in your application layer just as before.
Provided you have a web application between CouchDB and the users you have no problem. The issue comes when you allow CouchDB to serve requests directly.
Using multiple databases for multiple users has some important drawbacks:
queries over data in different databases are not possible with the native CouchDB API. Analysis of your website's overall status becomes pretty much impossible!
maintenance will soon become very hard: think of replicating/compacting thousands of databases each time you want to perform a backup
It depends on your use case, but I think that a nice approach can be:
allow access only through a virtual host. This can be achieved using a proxy, or much more simply by using a CouchDB hosting provider which lets you fine-tune your "domains->path" mapping
use design docs / couchapps, instead of the direct document CRUD API, for read/write operations
2.1. using the _rewrite handler to allow only valid requests: in this way you can instantly block access to sensitive handlers like _all_docs, _all_dbs and others
2.2. using _list and _view handlers for read doc/role-based ACLs as described in CouchDb read authentication using lists
2.3. using _update handlers for write doc/role based ACLs
2.4. using authenticated rewriting rules for read/write role based ACL.
2.5. a filtered _changes handler is another way of retrieving all of a user's data with read doc/role-based ACLs (see the sketch below). Depending on your use case this can simplify your read API as much as possible, letting you concentrate on your update API.
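A hedged sketch of that last point; the design doc name and the owner field are assumptions about how documents are tagged with their user.

```js
// A filter function so a user's _changes feed only returns their own documents.
var ddoc = {
  _id: '_design/peruser',
  filters: {
    by_owner: function (doc, req) {
      // req.userCtx.name is the authenticated CouchDB user
      return doc.owner === req.userCtx.name;
    }.toString()
  }
};
// Client usage (as the logged-in user):
//   GET /db/_changes?filter=peruser/by_owner&include_docs=true
```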
I’m working on a SPA built with DurandalJS, which is hosted on app.example.com. The API is hosted on api.example.com. We're now planning to add backend administration for ourselves, to overlook our clients. We'll each have an account and we'll be able to manage our client stuff.
What we're trying to figure out is where to host the backend.
If we keep it on the app subdomain, we'll only have to add a new role (admin) to the existing application, but this will allow regular users to log in to the backend if our credentials are somehow leaked.
If we clone the existing application to admin.example.com, we'll always have to worry about the code being in sync, but it will be safer, because the admin subdomain will be closed to the public and the login for admins will require a different set of api and private keys.
How should we handle this? If we go with #2, are there better ways to share code between two apps without the extra headaches?
I personally like the second approach to go with different subdomains.
Duplicating the codebase is not really necessary, since you can leverage the cool features RequireJS provides to map aliases to your modules. The important point is that by extracting the business logic into modules you can serve different implementations.
I've created a small GitHub Repo called durandal-multisite to explain in detail how you would proceed.
The general idea is to:
Keep the same viewmodels/views
Extract business logic (this should be done anyway to follow proper SoC)
Create 2 main.js implementations (frontend/backend)
Each main.js sets up a RequireJS map from the aliases requested by your viewmodels to the concrete module implementations (as sketched below)
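A hedged sketch of that map configuration; the module names are placeholders, not taken from the durandal-multisite repo.

```js
// main-frontend.js – the public build resolves the alias to one module...
requirejs.config({
  map: {
    '*': {
      'services/data': 'services/data-frontend'   // hypothetical module ids
    }
  }
});

// main-backend.js – ...while the admin build points the same alias elsewhere.
requirejs.config({
  map: {
    '*': {
      'services/data': 'services/data-backend'
    }
  }
});

// Viewmodels simply do `define(['services/data'], function (data) { ... })`
// and receive whichever implementation the bootstrapping main.js chose.
```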
I think you need to distinguish between creating subdomains that point to the same application or creating two separate applications.
I think in order to give a full answer, we need to identify some important aspects of your app first:
How are your authentication and authorization implemented? Are they part of the SPA? Do they happen before loading the SPA?
Will the code of the application be 100% the same as the admin application? What do you mean by keeping them in sync?
Making some assumptions, I can give you an answer; it might not be accurate, but it could help you:
Subdomains are cool, you get some information upfront (which subdomain the user is trying to access) so you can qualify requests easily and determine some stuff before actually hitting the application server. However, I don't think your problem here is in which subdomain the application should live.
The first thing you need to answer is how you would qualify a user as being an Admin versus a regular User. Obviously you should not rely on a subdomain to do so. Most likely this logic would live in the login process, based on some data (probably from a DB).
The next thing that you need to know is how your application changes depending on the role:
If your application will be 100% the same (same code) and will react dynamically based on the role that is logged in, you don't really need anything special. All you need is to make sure that your application is secure enough not to allow regular users to do admin stuff.
You could use the subdomain and some extra logic so only Admins can use the subdomain. However, this is only some "sugar" security to add separation. The app still needs to manage roles and permissions consistently.
-
If your application is not using the same codebase, you need to determine during the login process which role is logging in and which SPA application to send to the browser. To do so, you need a separate login page or a modular SPA that can load modules dynamically.
Probably you would like to reuse some code between applications (admin & user facing apps). You will have some challenges reusing parts of the codebase.
You don't need to worry that much about permissions and roles in the user app, but you do need a secure login process.
(Just a reminder) In any event, SPAs should contain logic to manage roles and permissions for the sake of consistency and to avoid user confusion, but the main security management is in your API. The goal in any SPA that has authentication and authorization is to have a secure API behind it.
Here's my current app structure:
/_css/
/_img/
/_js/
/app/
/lib/
index.html
After reading both answers, I ended up separating the apps while keeping them under the same roof.
/_css/
/_img/
/_js/
/app/
/backend/
/lib/
index-app.html
index-backend.html
This makes it easier to manage and switch between the apps. I don't have to worry about keeping the CSS, the images and the libraries in sync or creating a new git repo for the backend.
I've also updated my Gruntfile.js to include separate build processes for the app and the backend. So locally the apps will be under the same /_js folder, but on the server they'll be on separate domains.
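For illustration only, a hedged sketch of what such a split Grunt build might look like; the task names and paths are assumptions, not the actual Gruntfile.

```js
// Gruntfile.js – two build targets that copy each app's sources to its own output.
module.exports = function (grunt) {
  grunt.initConfig({
    copy: {
      app:     { expand: true, cwd: '_js/app/',     src: ['**'], dest: 'dist/app/' },
      backend: { expand: true, cwd: '_js/backend/', src: ['**'], dest: 'dist/backend/' }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.registerTask('build-app',     ['copy:app']);      // deploy to app.example.com
  grunt.registerTask('build-backend', ['copy:backend']);  // deploy to admin.example.com
};
```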
I should have been more specific with my question and mentioned that my problem was more about how to manage both apps locally, rather than on the server.
Thanks for the answers!
I'm building a relatively complex and data-heavy web application in AngularJS. I'm planning to use PHP as a RESTful backend (with Symfony2 and FOSRestBundle). I have spent weeks looking around for different solutions for on/offline synchronization, and there seem to be many half-solutions (see the list below for some examples), but none of them seem to fit my situation perfectly. How do I go about deciding which strategy will suit me?
Which issues might determine “best practices” for building an on/offline synchronization system in AngularJS and Symfony2 needs some research, but off the top of my head I want to consider things like speed, ease of implementation, future-proofing (a lasting solution), extensibility, resource usage/requirements on the client side, having multiple offline users editing the same data, and how much and what type of data to store.
Some of my requirements that I'm presently aware of are:
The users will often be offline and then need to synchronize (locally created) data with the database
Multiple users share some of the editable data (potential merging issues needs to be considered).
User's might be logged in from multiple devices at the same time.
Allowing a large amount of data to be stored offline (up to a gigabyte)
I probably want the user to be able to decide what he wants to store locally.
Even if the user is online I probably want the user to be able to choose whether he uses all (backend) data or only what's available locally.
Some potential example solutions
PouchDB - Interesting strategies for synchronizing changes from multiple sources
Racer - Node lib for realtime sync, build on ShareJS
Meteor - DDP and strategies for sync
ShareJS - Node.js operational transformation, inspired by Google Wave
Restangular - Alternative to $resource
EmberData - EmberJS’s ORM-like data persistence library
ServiceWorker
IndexedDB Polyfill - Polyfills IndexedDB for browsers that only support WebSQL (Safari)
BreezeJS
JayData
Loopback’s ORM
ActiveRecord
BackBone Models
lawnchair - Lightweight client-side DB lib from Brian Leroux
TogetherJS - Mozilla Labs’ multi-client state sync/collaboration lib.
localForage - Mozilla’s DOMStorage improvement library.
Orbit.js - Content synchronization library
(https://docs.google.com/document/d/1DMacL7iwjSMPP0ytZfugpU4v0PWUK0BT6lhyaVEmlBQ/edit#heading=h.864mpiz510wz)
Any help would be much appreciated :)
You seem to want a lot of stuff; the sync stuff is hard... I have a solution to some of this in an OSS library I am developing. The idea is that it versions local data, so you can figure out what has changed and therefore do meaningful sync, which also includes conflict resolution etc. This is sort of the offline Meteor, as it is really tuned for offline use (for the London Underground, where we have no mobile data signal).
I have also developed an eco system around it which includes a connection manager and server. The main project is at https://github.com/forbesmyester/SyncIt and is very well documented and tested. The test app for the ecosystem will be at https://github.com/forbesmyester/SyncItTodoMvc but I have yet to write virtually any docs for it.
It is currently using localStorage but will be easy to move to localForage, as it actually uses a wrapper around localStorage to expose an async API... Another one for the list, maybe?
To work offline with your requirements, I suggest dividing the problem into two scenarios: content (HTML, JS, CSS) and data (the REST API).
The content
Will be stored offline by AppCache for small apps, or for advanced cases with the awesome Service Workers (Chrome 40+).
The data
Requires solving storage and synchronization, and it becomes a more difficult problem.
I suggest a deep reading of the Differential Synchronization algorithm, and take the following tips into consideration:
Frontend
Store the resource and its shadow (using, for example, the URL as the key) in localStorage for small apps, or in more advanced alternatives (PouchDB, IndexedDB, ...). With the resource you can work offline, and when you need to synchronize with the server, use a JSON diff/patch library to get the diffs between the resource and its shadow and send them to the server in a PATCH request.
Backend
On the backend, consider storing the shadow copies in Redis.
Both sides (frontend/backend) need to identify the client node; to do so you could use an x-syn-token HTTP header (send it in all client requests using Angular interceptors). A rough sketch of the frontend side follows below.
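A hedged sketch of the frontend half under the assumptions above; the localStorage keys, the computeDiff helper and the client-node id are illustrative, not prescribed by the answer.

```js
// Keep a working copy and a "shadow" (last known server state) per resource URL.
function saveLocal(url, resource) {
  localStorage.setItem(url, JSON.stringify(resource));                  // working copy
  if (!localStorage.getItem(url + '::shadow')) {
    localStorage.setItem(url + '::shadow', JSON.stringify(resource));   // server state
  }
}

// When connectivity returns, diff the working copy against the shadow and PATCH the server.
function syncResource($http, url, clientNodeId) {
  var current = JSON.parse(localStorage.getItem(url));
  var shadow  = JSON.parse(localStorage.getItem(url + '::shadow'));
  var diff    = computeDiff(shadow, current);   // hypothetical helper, e.g. a JSON Patch generator
  return $http({
    method: 'PATCH',
    url: url,
    data: diff,
    headers: { 'x-syn-token': clientNodeId }    // identifies this client node
  }).then(function () {
    // The server accepted the patch; advance the shadow to the new state.
    localStorage.setItem(url + '::shadow', JSON.stringify(current));
  });
}
```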
https://www.firebase.com/
It's reliable and proven, and can be used as a backend and sync library for what you're after. But it costs money, and requires some integration coding.
https://goinstant.com/ is also a good hosted option.
In some of my apps I prefer to have both: a syncing DB source AND another main database (Mongo/Express, PHP/MySQL, etc.). Then each DB handles what it's best at, with its own features (real-time vs. security, etc.). This is true regardless of the sync-DB provider (be it Racer or Firebase or GoInstant...).
The app I am developing has many of the same requirements and is being built in AngularJS. In terms of future-proofing, there are two main concerns that I have found: one is hacking attempts, requiring encryption, possibly one-time keys and a backend key manager; the other is support for WebSQL being dropped by the standards consortium in preference to IndexedDB, so finding an abstraction layer that can support both is important. The solution set I have come up with is fairly straightforward: offline data is loaded into the UI first, and a request goes out to the REST server if in an online state. As for resolving data conflicts in a multi-user environment, that becomes a business-rule decision. My decision was to simplify the matter and not delve into data merges, but to use a microtime-stamp comparison to determine which version should be kept and pushed out to clients. When in offline mode, store data as a dirty write and push it to the server when returning to an online state.
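A tiny, hedged illustration of that last-write-wins rule (the field names are assumptions):

```js
// Keep whichever record carries the newer microtime-style timestamp instead of merging.
function resolveConflict(localRecord, serverRecord) {
  return localRecord.updatedAt > serverRecord.updatedAt ? localRecord : serverRecord;
}

// Example: resolveConflict({ updatedAt: 1700000002.51, v: 'offline edit' },
//                          { updatedAt: 1700000001.10, v: 'server copy' })
// keeps the offline edit, which is then pushed out to the other clients.
```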
Or use ydn-db, which I am evaluating now, as it has built-in support for AWS and Google Cloud Storage.
Another suggestion:
Yjs leverages an OT-like algorithm to share a wide range of supported data types, and you have the option to store the shared data in IndexedDB (so it is available for offline editing).