I would like to retrieve table data from a REST backend server with AngularJS. The data changes by the second, and I would like to use AngularJS to refresh the displayed data as it changes, in real time. How can this be done? Do I force AngularJS to make AJAX calls at regular intervals to refresh the data display?
Yeah, with REST there's no other way than polling. To refresh the data in the browser view you can use $rootScope.$apply() in the service (I presume that you're using a service to get the data); you first have to inject the $rootScope dependency, of course.
EDIT
Actually, you shouldn't use $rootScope.$apply(), because it can lead to ugly "$digest already in progress" errors. Instead, inject the $timeout service and use
$timeout(function () {
    // set data on the scope here...
});
If you're not happy with polling and you have programmed the backend yourself (or are in a position to change it), here is a tip for improvement:
Try using WebSockets:
They get rid of polling, because your browser and the server can communicate directly over a persistent connection.
They have less overhead.
They are supported in the major browsers, and you can use libraries with fallback mechanisms like Socket.IO.
The server can then push the data the moment it's available. The server-side implementation depends on the backend you are using, but most backend frameworks/servers nowadays also support WebSockets.
You have to use either a poll or a push strategy. However, since you already know that the resource changes regularly, it should be enough to set up a recurring timeout so that your application polls the resource on the REST server every second. Once the data is retrieved, AngularJS updates the view. So yes, you have to force AngularJS to make calls to the service.
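That recurring poll is framework-agnostic in principle; here is a minimal sketch, where `fetchFn` is a hypothetical stand-in for the `$http` call:

```javascript
// Generic recurring-poll sketch: fetchFn retrieves the resource,
// onData receives each result, and the returned function stops the loop.
function startPolling(fetchFn, onData, intervalMs) {
  const id = setInterval(() => {
    fetchFn().then(onData).catch(() => { /* ignore transient errors */ });
  }, intervalMs);
  return () => clearInterval(id);
}
```

In AngularJS itself you would typically use the `$interval` service rather than a raw `setInterval`, so that each tick runs inside a digest cycle and the view refreshes automatically.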
Related
How do I call a Node.js RESTful web service automatically when data is stored in Google's Firebase Realtime Database? For example, once I store user details to Firebase, my RESTful web service should detect the data and perform changes on it. My code works perfectly fine, but my question is how do I make it run automatically without me running the server remotely.
You can do it in one of two ways:
(1) Traditional approach:
You can implement polling: call your REST API again and again at a certain interval, get the data, and check for changes.
(2) WebSockets (I like them more):
You can also use WebSockets; a web socket can maintain live two-way communication with the server. A reference on how to use WebSockets can be found here:
https://developer.mozilla.org/en-US/docs/Web/API/WebSocket
I think you can use an event listener; depending on the event, a specific part of your code will be executed. Check the documentation for what suits you: https://firebase.google.com/docs/database/admin/retrieve-data
I'm looking for a technique or approach for a new web site.
This site shows real-time data that is located on the server, as a file or as data in memory.
I'll use Node.js on the server side, but I can't decide how to get the data and show it to the site's users,
because this data has to update at least once per second.
I think it is similar to a stock price page.
I know there are a lot of ways to access data, like AJAX, Angular.js, Socket.io...
Also, each has pros and cons.
Which platform or framework is good in this situation?
This ultimately depends on how much control you have over the server side. For data that needs to be refreshed every second, doing the polling on client side would place quite the load on the browser.
For instance, you could do it by simply using one of the many available frameworks to make http requests inside some form of interval. The downsides to this approach include:
the interval needs to be run in the background all the time while the user is on the page
the http request needs to be made for every interval to check if the data has changed
comparison of data also needs to be performed by the browser, which can be quite heavy at 1 sec intervals
If you have some server control, it would be advisable to poll the data source on the server, i.e. using a proxying microservice, and use the server to perform change checking and only send data to clients when it has changed.
You could use Websockets to communicate those changes via a "push" style message instead of making the client browser do the heavy lifting. The flow would go something like:
server starts polling when a new client starts listening on its socket
server makes http requests for each polling interval, runs comparison for each result
when result has changed, server broadcasts a socket message to all connected clients with new data
The main advantage to this is that all the client needs to do is "connect and listen". This even works with data sources you don't control – the server you provide can perform any data manipulation needed before it sends a message to the client, the source just needs to provide data when requested.
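A minimal sketch of that server-side flow, with `fetchSource` standing in for the upstream HTTP call and `clients` holding one send callback per connected socket (both hypothetical placeholders):

```javascript
// Poll the upstream source, compare with the last result, and broadcast
// to connected clients only when something actually changed.
function createChangeBroadcaster(fetchSource, clients) {
  let lastSerialized = null;
  // Call the returned function on an interval while clients are connected.
  return async function pollOnce() {
    const data = await fetchSource();
    const serialized = JSON.stringify(data);
    if (serialized === lastSerialized) return; // unchanged: stay silent
    lastSerialized = serialized;
    for (const send of clients) send(data); // e.g. ws.send(...) per socket
  };
}
```

Serializing and comparing on the server keeps the heavy lifting off the browser; each connected client only ever receives data that has actually changed.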
EDIT: just published a small library that accomplishes this goal: Mighty Polling ⚡️ Socket Server. It's still young, so examine it before using it for your use case.
I'm currently experimenting with WebSockets in a bid to reduce / remove the need for constant AJAX requests in a potentially low bandwidth environment. All devices are WebSocket compliant so there's no issue there, and I'm trying to keep it to native PHP WebSockets, no node.js or other frameworks / libraries (Which so far has been fine).
What I'm looking to do is decide how to notify connected clients about an update to a database made by another client. The use case in question is a person pressing a button on their device, which then alerts that person's manager(s) to the press. The two options I have thought of are as follows:
1. Looping a Database Query (PHP)
My first thought was to insert a query into the WebSocket server that effectively asks "Has the alert field changed? If so, notify the manager(s)". While this is the most straightforward and sensible approach I can think of, it seems wasteful for a PHP script designed to reduce strain on the server to now be running a query every second. On the other hand, at least this would ensure that when a database update is detected, the update is sent.
2. Sending a notification from the Client
Another thought I had was that when the client updates the database, they could in fact send a WebSocket notification themselves. This has the advantage of avoiding any intensive looped queries, but it also means that I'd need a WebSocket message to be sent every time I want to change any data, such as:
$.post("AttemptDatabaseUpdate.php", data, function (result) // Don't worry about the semantics of this, it's not actual code
{
    if (result === "Successful")
    {
        SendWebSocketNotification(otherData);
    }
});
Maybe this is the best option, as it is the most efficient, but I worry that there is a chance the connection may drop between updating the database and sending the WebSocket notification, which may create a need for a fallback check in the PHP file, much like the one in the first solution, albeit at a longer interval (say, every 30 seconds).
3. MySQL Trigger?
This is purely a guess, but perhaps another option is to create a MySQL trigger which can somehow notify the server.php file directly? I've no idea how this would work, and I'd hazard a guess that it may end up with the same or similar query requirements as solution #1, but it's just a thought...
Thank you in advance for your help :)
EDIT: Solution possibility 4
Another thought has just popped into my head in fact, whereby the PHP file used to update the database could in fact have a WebSocket message built into it. So that when the PHP file updates the database, the WebSocket server is notified via PHP, is this possible?
If you use websockets, you should use notifications from client. That's one of their main use cases.
If you're worried about inconsistencies due to the connection dropping or something changing in between, you could implement a system similar to HTTP ETags, where the client sends a hash code that the server can check to detect a conflict before applying the update.
Update: I guess I understood your initial issue a bit wrong. If I understand your use case correctly: you are sending database updates from a client and after that all connected clients need to be updated. In that case, I think server should send the update messages after DB updates have been done, so I agree with solution 4. I am assuming here that your websocket server is the same server running PHP and doing the DB updates.
However, depending on your use case, client should still send a hash value on the next request identifying its "view of the world", so you would not be doing identical updates multiple times if a connection gets broken.
Update 2: so it was now understood that you indeed use a separate, standalone websocket server. Basically you have two different web servers on the server side and are having an issue on how to communicate between the two. This is a real issue, and I'd recommend only using one server at a time - either take a look at using Apache websocket support (experimental and not really recommended) or migrating your php scripts to the websocket instance.
Neither PHP nor Apache was really built with WebSockets in mind. It is quite easy to set up a standalone WebSocket server using only PHP, but it might not be so easy to migrate the rest of the PHP stack to it if the code relies on Apache or another web server. Apache's WebSocket support is also hardly optimal. For a real WebSocket solution, unfortunately, best practice is to use a technology that was built for it from the ground up.
The better answer is to send the notification from the server side when the database is updated by the PHP script; the script then uses web sockets to send the notification directly to all registered WebSocket clients.
User sends content -> PHP script processes the content and saves the data if the conditions hold -> check that the database was updated by checking the return of mysql_query (or an alternative) -> if true, use the web socket to send a notification to all users.
This is easier, handier, and saves bandwidth.
I have a service method written in ASP.NET Web API: http://diningphilospher.azurewebsites.net/api/dining?i=12
and a JavaScript client gets the response and visualizes it here.
But the nature of the Dining Philosophers problem is that I never know when the deadlock or starvation will happen. So instead of having a request/response cycle, I would like to stream the data through the service method and have the client-side JavaScript read the data (JSON, I assume) asynchronously. Currently, several posts direct me towards changing the default buffer limit in Web API so that you get streaming-like behavior.
What other (easier or more efficient) ways exist to achieve this behavior?
You can return PushStreamContent from ASP.NET Web API and use the Server-Sent Events (SSE) JavaScript API on the client side. Check out the Push Content section in Henrik's blog. Also, see Strathweb. One thing I'm not sure about in the latter implementation is the use of ConcurrentQueue. Henrik's implementation uses ConcurrentDictionary, which allows you to remove the StreamWriter objects corresponding to clients who drop out; that will be difficult to implement using ConcurrentQueue, in my opinion.
Also, Strathweb implementation uses KO. If you don't like to use KO, you don't have to. SSE JavaScript APIs have nothing to do with KO.
BTW, SSE is not supported in IE 9 or earlier.
Another thing to consider is the scale out option. Load balancing will be problematic, in the sense there is a chance that the load will not be uniformly distributed, since clients are tied to the server (or web role) they hit first.
I am developing a native iPhone app in Titanium.
Within this app I am using data from a remote API (which I have developed in Rails 3).
I want the user to cache the API data on their phones as much as possible.
What I need help with is the concept of caching. What is the best way of doing it? The nature of the data in the API is that it needs to be up to date. Because it is contact data that can change anytime.
I have no clue how the caching process would work. If someone can explain the best way of managing a caching process for the API, I would be more than happy!
I am using JSON and Javascript.
"The nature of the data in the API is that it needs to be up to date. Because it is contact data that can change anytime"
If that's true, then it makes any kind of caching redundant: you would need to compare the cache to live data to check for changes, which makes the cache itself pointless.
The only reason you may still want to cache the data is to have it available offline. That being the case, I would use an SQLite database, which is native to the iPhone.
titanium-cache is clean code with a unit tests and provides some sample code in the readme. I integrated this with my own project in a matter of minutes and it worked nicely.
I think the type of cache is application-dependent.
You can cache data on:
client;
server;
other network element.
The critical point is the refreshing of data. A bad algorithm produces inconsistent data.
You can find interesting information in the distributed-systems literature.
Bye
A couple options here.
1) You can use ASIHTTPRequest and ignore cache headers so that everything is cached. When your app is being used, you can detect whether the cache was hit. If it was, fire off a request to the server after the cache hit to ask for any new data; you can do this by appending a random URL param to the end of the URL, since the cache keys off of the URL. If you have a good connection and new data, load it in. Otherwise do nothing, and your user has the latest data when using the app under a good connection.
2) Do most of #1 by always hitting the cache, but instead of firing a non-cacheable version of the same request to the server after hitting the cache, fire off a non-cacheable timestamp check to see if the data was updated. If it has been, fire off the non-cacheable full API request. If it hasn't, or the check fails, do nothing.
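Option 2 can be sketched like this; `fetchTimestamp`, `fetchFull`, and the `cache` object are hypothetical placeholders for the real HTTP layer and local storage:

```javascript
// Cheap freshness check: only pull the full payload when the server's
// last-modified timestamp is newer than what we have cached.
async function refreshIfStale(cache, fetchTimestamp, fetchFull) {
  const serverTs = await fetchTimestamp();
  if (cache.timestamp && serverTs <= cache.timestamp) {
    return cache.data; // cache is current: skip the expensive request
  }
  cache.data = await fetchFull();
  cache.timestamp = serverTs;
  return cache.data;
}
```

The timestamp request is tiny compared to the full contact payload, so even on a poor connection the common "nothing changed" case stays cheap.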