A chat application can be implemented using a database (the browser sends a request repeatedly at a fixed interval and reads new messages from a table that stores them).
I want to know: is there a way to implement a chat application using Ajax and JSP/servlets over HTTP with no database connection? I know JSP and servlets. How can sessions, requests, and responses be handled internally in a JSP/servlet application?
If you want the non-production, educational version, you can use Application Scope:
You can have an application-scoped variable holding the chat list
E.g. use <jsp:useBean scope="application"> (one instance per application)
And as long as you keep your thread-safety goggles on, and synchronize where needed, you are fine
But as mentioned, do check out node.js; it seems like the natural candidate for this
Edit:
Note that the application context is per VM, i.e. not the most scalable approach
You can also use ServletContext.setAttribute (same synchronization and scaling issues)
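For a quick illustration, here is a minimal client-side sketch, assuming a hypothetical servlet mapped to /chat that returns the application-scoped message list as JSON:

// Minimal Ajax polling sketch; /chat is a hypothetical servlet mapping
// that returns the application-scoped message list as a JSON array.
function pollMessages() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/chat", true);
  xhr.onload = function () {
    if (xhr.status === 200) {
      var messages = JSON.parse(xhr.responseText);
      // render the messages into the page here
    }
  };
  xhr.send();
}

function sendMessage(text) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/chat", true);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send(JSON.stringify({ text: text }));
}

setInterval(pollMessages, 2000); // poll every two seconds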
A database is just a glorified file. If your data is simple enough and you don't want to deal with databases, just write to a file.
If you are a Java guy, what you need seems to me like a good fit for spire.io, a service that allows you to build server-less, database-less applications with a Java client.
I've read a few StackOverflow posts related to this subject but I can't find anything that specifically helps me in my scenario.
We have multiple monitoring instances within our network, monitoring different environments (Nagios, Icinga, more...). Currently I have a poller script written in PHP which runs every minute via cron; it asks each instance to return all of its problems in JSON, then interprets this and pushes it into a MySQL database.
There is then an 'overview' page which simply reads the database and does some formatting. There's a bit of AJAX involved: every X seconds (currently 30) it checks for changes (a PHP script call) and, if there are changes, it requests them via AJAX and updates the page.
There are a few other little bits too (click a problem, and another AJAX request goes off to fetch the problem details to display in a modal, etc.).
I've always been a PHP/MySQL dev, so the above methodology seemed logical to me and was quick/easy to write, and it works 'ok'. However, the problems are: the database is constantly being polled by many users, and there's a mesh of JavaScript on the front end doing half the logic and PHP on the back end doing the other half.
Would this use case benefit from switching to Node.js? I've done a bit of Node.js before, but nothing like this. Can I subscribe to MySQL updates, or trigger them when a 'data fetcher' pushes data into the database? I've always been a bit confused because I use PHP to create data and JavaScript to 'draw' the page: is there still a split between Node.js doing the logic and front-end JavaScript creating all the elements, or does Node.js do all of this now? Sorry for the lack of knowledge in this area...
This is definitely an area where Node could offer improvements.
The short version: with websockets in the front-end and regular sockets or an API on the back-end you can eliminate the polling for new data across the board.
The long version:
Front-end:
You can remove all need for polling scripts by implementing websockets. That way, as soon as new data arrives on the server, you can broadcast it to all connected clients. I would advise Socket.io or the Primus websocket wrapper. Both are very easy to implement and incredibly powerful for what you want to achieve.
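As a rough sketch of the Socket.io flow (the event name and port are made up for illustration):

// Server: broadcast new data to every connected client the moment it arrives.
var io = require('socket.io')(3000);

function broadcastProblems(problems) {
  io.emit('problems', problems); // 'problems' is an arbitrary event name
}

// Client (with the socket.io client library loaded):
var socket = io('http://localhost:3000');
socket.on('problems', function (problems) {
  // re-render the overview page here
});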
All data processing logic should happen on the server. The data is then sent to the client and should be rendered on the existing page, and that is basically the only logic the client should contain. There are some frameworks that do all of this for you (e.g. Sails) but I don't have experience with any of those frameworks, since they require you to write your entire app according to their rules, which I personally don't like (but I know a lot of developers do).
If you want to render the data in the client without a huge framework, I highly recommend the lightweight but incredibly useful Transparency rendering library. Using this, you can format a JavaScript object on the server using Node, JSONify it, send it to the client, and then all the client has to do is de-JSONify it and call Transparency's .render.
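A tiny sketch of what that looks like with Transparency, assuming markup with matching class names:

// Markup assumed: <div id="problem"><span class="host"></span> <span class="state"></span></div>
// Transparency matches object keys to elements by id, class or data-bind.
var problem = { host: 'web-01', state: 'CRITICAL' };
Transparency.render(document.getElementById('problem'), problem);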
Back-end:
This one depends on how much control you have over the behaviour of the instances you need to check. I assume you have some control, since you can get all their data in a nice JSON format. So, there are multiple options.
You can keep polling every so often. This is the easiest solution since it requires no change to the external services. The JavaScript setInterval function is very useful here. Depending on how you connect with the instances, you might be able to use a module like Request to do the actual request, which takes out a bunch more of the heavy lifting.
The benefit of implementing the polling in your Node app as well is that you will receive the data in your Node app, and that way you can immediately broadcast it to the clients, even before inserting it into the database. This will greatly reduce the number of queries on your database.
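A bare-bones sketch of that polling loop with setInterval and the Request module (the URL and interval are placeholders; broadcastProblems refers to the websocket sketch above):

var request = require('request');

setInterval(function () {
  request('http://monitoring.example/problems.json', function (err, res, body) {
    if (err || res.statusCode !== 200) return;
    var problems = JSON.parse(body);
    broadcastProblems(problems); // push straight out over websockets
    // ...then insert into the database as before
  });
}, 60 * 1000); // once a minute, like the current cron job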
An alternative to polling would be to set up a simple Express-based API where the applications can post their 'problems', as you call them. This way your application will get notified the moment a problem occurs, and combined with the websockets connection to the client this would result in practically real-time updates.
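Such an endpoint is only a few lines in Express (the route name is invented for illustration):

var express = require('express');
var app = express();
app.use(express.json()); // built-in JSON body parsing (Express 4.16+)

// Monitoring instances POST their problems here instead of being polled.
app.post('/problems', function (req, res) {
  broadcastProblems(req.body); // straight out to the websocket clients
  res.sendStatus(200);
});

app.listen(4000);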
For redundancy, you could run a polling timer alongside the API, so that you can still check the instances in case something goes wrong and they stop sending data.
An alternative to the higher-level API would be to use direct socket communication, which is basically the same approach, only using a different set of functions.
Lastly, you could also keep the PHP-based polling script. This would be the least disruptive solution since you wouldn't have to replace everything. Then, from the Node app that's connected to the clients with websockets, you could set an interval to query the database every so often and broadcast the updates. This will still greatly reduce the number of queries, since no matter how many clients are connected there will only be one query, the response of which then gets sent to all connected clients.
I hope my post has given you some ideas of how you could implement your application using Node. Keep in mind though that I am just one developer; this is how I would approach building your application in Node, and there will definitely be others who have different opinions.
I get the general idea at this point: angular.js is client-side, so any database communication is done by initiating GET/POST requests to a server-side script (via Node, PHP, ASP.NET, whatever you're using)...
The only thing I haven't been able to determine is the proper practice for this, in terms of both convention and security: do you make specialized endpoints for each of your particular queries, or one to a few general-purpose endpoints that run whatever is passed in as parameters? The latter option seems like a security nightmare, but at the same time making an endpoint for each table's select, insert, update, etc. also seems nonviable.
To be clear, and to try to focus this back to a single question: it feels like I'm missing a concept here. How do you structure the database calls for an angular.js application?
From a security standpoint it isn't very different from a traditional web app. Your web server sends and receives JSON (most likely) instead of HTML. This means using something like rails-api instead of full Rails. It's best to think of your Angular app as completely disconnected from your web server, like an Android or iOS app is.
You might use token-based authentication instead of cookies (nothing would preclude you from using token-based auth in a traditional web app, but I wouldn't say it's commonplace there). Other than that, any concepts that apply to securing a traditional web application apply to securing an API.
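In practice that just means the client attaches the token to every request, e.g. (the endpoint and storage are illustrative):

// Hypothetical API call with a bearer token instead of a session cookie.
var token = localStorage.getItem('authToken');
fetch('/api/items', {
  headers: { 'Authorization': 'Bearer ' + token }
}).then(function (res) {
  return res.json();
}).then(function (items) {
  // hand the JSON to the Angular side for rendering
});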
What database do you use? The structure depends on the app you're building and on whether the DB is relational or not. It's a strategic question; for example, it may be better to have nested documents, or not.
I'm developing an app for Firefox OS and I need to retrieve/send data from/to my DB. I also need to use this data in my logic implementation, which is in JS.
I've been told that I cannot run PHP in Firefox OS, so is there any other way to retrieve the data and use it?
PS: This is my first app that I'm developing, so my programming skills are kind of rough.
You can use a local database in JS, e.g. PouchDB, TaffyDB, PersistenceJS, LokiJS or jStorage.
You can also save data to a backend server e.g. Parse or Firebase, using their APIs.
Or you can deploy your own backend storage and save data to it using REST.
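To give an idea of the local option, a short PouchDB sketch (database and document names are made up):

var db = new PouchDB('myapp');

// store a document locally...
db.put({ _id: 'note:1', text: 'hello' }).then(function () {
  // ...and read it back for use in your JS logic
  return db.get('note:1');
}).then(function (doc) {
  console.log(doc.text); // "hello"
});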
You should stick to the basic communication paradigms when sending/receiving data to/from a DB. In your case you need to pass data between the app and the DB via a web application.
Never, ever let an app communicate with your DB directly!
So what you need to do first is to implement a wrapper application that gives controlled access to your DB. That's often done in PHP, for example. Your PHP application then offers the interfaces through which external applications (like your FFOS app) can communicate with the DB.
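Concretely, the FFOS app would then only ever talk HTTP to that wrapper, along these lines (the URL is a placeholder for your PHP endpoint):

// The app calls the wrapper's HTTP interface; only the wrapper touches the DB.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/api/messages.php', true); // placeholder URL
xhr.onload = function () {
  var rows = JSON.parse(xhr.responseText);
  // use the rows in your app logic
};
xhr.send();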
Since this goes back to very basic programming knowledge, please give an idea of how much you know about programming. I'll then consider offering further details.
It might be a bit harder to do than you expect, but it can also be easier than you think. Using MySQL as a backend has serious implications. For example, MySQL doesn't provide any HTTP interface as far as I know. In other words, for most SQL-based databases you'll have to use some kind of middleware to connect your application to the database.
Usually the middleware is a server that publishes some kind of HTTP API, probably REST-style or even RPC such as JSON-RPC. The language in which you write the middleware doesn't really matter. The serious problem you'll face with this variant is restricting data: preventing users from accessing data they shouldn't have access to.
There is also another variant, if you want a database plus synchronization on the server: CouchDB + PouchDB give you that for free. I mean it's really easy to set up, but you'll have to redesign some parts of your application. If your application makes a lot of data changes it might end up filling your disks, but if you're just starting out, it's possible that this setup will be more than enough.
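The synchronization part really is just a few lines of PouchDB (the remote URL is a placeholder):

var local = new PouchDB('myapp');
var remote = new PouchDB('https://couch.example.com/myapp'); // placeholder

// continuous two-way replication between device and server
local.sync(remote, { live: true, retry: true }).on('change', function (info) {
  // react to replicated changes here
}).on('error', function (err) {
  console.error('sync error', err);
});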
I have a bit of a conceptual question regarding the structure of users and their documents.
Is it good practice to give each user within CouchDB their own database which holds their documents?
I have read that CouchDB can handle thousands of databases and that it is not that uncommon for each user to have their own.
Reason:
The reason for asking this question is that I am trying to create a system where a logged-in user can only view their own documents and can't view any other user's documents.
Any suggestions?
Thank you in advance.
It’s a rather common scenario to create a CouchDB bucket (DB) for each user, although there are some drawbacks:
You must keep ddocs (design documents) in sync in each user bucket, so deploying ddoc changes across multiple buckets may become a real adventure.
If docs are shared between users in some way, you get doc and view-index duplicates in each bucket.
You must block _info requests to avoid leaking the user list (or you must name buckets using hashes).
In any case, you need some proxy in front of Couch to create and prepare a new bucket on user registration (see the sketch below).
You’d better protect Couch from running out of capacity when it receives too many requests – that also requires a proxy.
Per-doc read ACLs can be implemented using _list functions, but this approach has some drawbacks and also requires a proxy, at least a web server, in front of CouchDB. See CouchDb read authentication using lists for more details.
Also, you can try playing with CoverCouch, which implements full per-doc read ACLs while keeping the original CouchDB API untouched, but it’s in very early beta.
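For illustration, the registration step in such a proxy could look roughly like this in Node (the credentials and the userdb- naming scheme are assumptions; hashing the name avoids leaking usernames):

var crypto = require('crypto');
var request = require('request');

// Hypothetical proxy step: create a per-user bucket on registration.
function createUserDb(username, callback) {
  var name = 'userdb-' + crypto.createHash('sha1').update(username).digest('hex');
  request.put('http://admin:secret@localhost:5984/' + name, function (err, res) {
    callback(err, name);
  });
}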
This is quite a common use case, especially in mobile environments, where the data for each user is synchronized to the device using one of the Android, iOS or JavaScript (pouchdb) libraries.
So in concept, this is fine but I would still recommend testing thoroughly before going into production.
Note that one downside of multiple databases is that you can't write queries that span multiple databases. There are some workarounds though - for more information see Cloudant: Searching across databases.
Update 17 March 2017:
Please take a look at Cloudant Envoy for more information on this approach.
Database-per-user is a common pattern with CouchDB when there is a requirement for each application user to have their own set of documents which can be synced (e.g. to a mobile device or browser). On the surface, this is a good solution - Cloudant handles a large number of databases within a single installation very well. However ...
Source: https://github.com/cloudant-labs/envoy
The solution is as old as web applications - if you think of a MySQL database, there is nothing in the database to stop user B from viewing records belonging to user A; it is all coded in the application layer.
In CouchDB there is likewise no completely secure way to prevent user B from accessing documents written by user A. You would need to code this in your application layer just as before.
Provided you have a web application between CouchDB and the users you have no problem. The issue comes when you allow CouchDB to serve requests directly.
Using multiple databases for multiple users has some important drawbacks:
queries over data in different databases are not possible with the native CouchDB API, so analysis of your website's overall status is pretty much impossible!
maintenance will soon become very hard: think of replicating/compacting thousands of databases each time you want to perform a backup
It depends on your use case, but I think that a nice approach can be:
allow access only through virtual hosts. This can be achieved using a proxy, or much more simply by using a CouchDB hosting provider which lets you fine-tune your "domains->path" mapping
use design docs / couchapps, instead of direct document CRUD API, for read/write operations
2.1. using the _rewrite handler to allow only valid requests: this way you can instantly block access to sensitive handlers like _all_docs, _all_dbs and others (a sketch follows this list)
2.2. using _list and _view handlers for read doc/role based ACLs as described in CouchDb read authentication using list
2.3. using _update handlers for write doc/role based ACLs
2.4. using authenticated rewriting rules for read/write role based ACL.
2.5. a filtered _changes handler is another way of retrieving all of a user's data with read doc/role based ACLs. Depending on your use case this can simplify your read API as much as possible, letting you concentrate on your update API.
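As a sketch of point 2.1, a design doc can whitelist paths via its rewrites field; the _list and _update function names here are invented for illustration:

{
  "_id": "_design/app",
  "rewrites": [
    { "from": "/", "to": "index.html" },
    { "from": "/docs", "to": "_list/readacl/by_owner", "method": "GET" },
    { "from": "/docs/:id", "to": "_update/writeacl/:id", "method": "PUT" }
  ]
}

Requests that don't match any rule never reach handlers like _all_docs.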
I'm a Meteor newbie. I came across the Meteor Streams package which allows for "Real Time messaging for Meteor". Here's what it can do:
Meteor Stream is a distributed EventEmitter across meteor. It can be managed with filters and has a good security model (inherited from the existing meteor security model). You can create as many streams as you want, and it is independent from mongo.
With Meteor Streams, you can communicate between
client to clients
server to clients
client to server
server to servers
There's an example of it being used in a realtime blackboard where users can draw together. Being a Meteor newbie, out of ignorance I ask: what is the difference between using something like this and just updating a Session, i.e. Session.set/Session.get? As I've seen Session being used, two browsers can be open and updated with the same information via Session.set. So in an environment where two people are drawing, why can't it just be done with Session setting instead of Meteor collections or Streams? What is it I'm not understanding about Session setting? I'm probably wrong in thinking Session setting could be used instead; I just would like to know why, as it will help me understand Sessions in Meteor and the Meteor Streams package.
A Session variable is a quick variable created so you can reactively change stuff in templates.
E.g. you have this in your template:
<template name="hello">
{{message}}
</template>
Together with a template helper:
Template.hello.message = function() { return Session.get("message") }
If you do something like Session.set("message", "Hi there"), the HTML will say Hi there. The idea is that you can easily change your HTML using this. It's a type of one-way data binding.
A Meteor stream helps you communicate between the browser and server (or other combinations of servers and clients) so you can send messages to and fro, but it won't help you change the HTML.
Likewise, the Session won't help you communicate between the browser and server; it can help change the HTML when you have a result in your events, or pass data reactively between your JavaScript code and the HTML that the user sees.
With the blackboard example, you can share the data the other users have drawn, but it won't help you draw onto your own blackboard. (In the case of the blackboard you could use streams because you're updating a canvas with JavaScript, so you don't need a Session.) You can't use Session on its own (or don't need to, since it's a canvas), but you need Meteor Streams to communicate with the other users.
You could use stuff like jQuery to update your HTML too! Using a Session is by far the easiest though, because you can use it all over and only have to update one thing for all the rest to change.
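For contrast, based on the Meteor Streams README, the blackboard case would look roughly like this (the stream and event names are illustrative):

// a named stream, shared by client and server, independent of Mongo
drawStream = new Meteor.Stream('blackboard');

if (Meteor.isClient) {
  // broadcast each stroke to everyone else...
  sendStroke = function (stroke) {
    drawStream.emit('stroke', stroke);
  };
  // ...and render strokes coming from other users onto the canvas
  drawStream.on('stroke', function (stroke) {
    // draw the stroke with canvas JS here; no Session needed
  });
}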