Webpage showing data - Data updated meanwhile - javascript

I have an ASP.NET MVC 4 website and I have a regular page showing some data. I want to add a similar feature like SO has, which notifies the user that another user has made some changes (database record update) since the page was opened.
What's the best practice to achieve this? I don't know if the most common approach is to use timers, or if there is another option, like listeners.
Thanks

Not too familiar with ASP.NET MVC 4, so this answer is from a design PoV.
Basically there are 2 solutions:
Have the server inform (all) clients that the data has changed (reverse AJAX, Listeners/Observers)
Have the client regularly ask the server if the data has changed (Timer)
In 99.9% of cases you want to pick option 2. Keeping track of all clients and managing that list creates more problems than it is worth, while receiving an additional request every x seconds hardly ever kills you (as long as x is proportional to your server and customer base).
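For illustration, a minimal client-side polling sketch with jQuery; the /data/version endpoint and the initialVersion, recordId and showNotification names are assumptions, not part of the question:
// Ask the server every 10 seconds whether the record has changed.
var knownVersion = initialVersion; // rendered into the page by the server
setInterval(function () {
    $.getJSON('/data/version', { id: recordId }, function (response) {
        if (response.version !== knownVersion) {
            knownVersion = response.version;
            showNotification('This record was changed by another user.');
        }
    });
}, 10000);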

Related

How to keep a list of items updated on a website using Javascript?

Coming from Python/Java/PHP, I'm now building a website. On it I want a list of items to be updated in near-realtime: if items (server side) get added to or deleted from the list, this should be updated on the webpage. I made a simple API call which I now poll every second to update the list using jQuery. Because I need some more lists to be kept updated on the same page I'm afraid this will turn into more than 10 server calls per second from every single open browser, even if nothing gets updated.
This seems not like the logical way to do it, but I don't really know how else to do it. I looked at Meteor, but since the webpage I'm building is part of a bigger system I'm rather restricted in my choices of technology (basic LAMP setup).
Could anybody enlighten me with a tip from the world of real-time websites on how to efficiently keep a list updated?
You can use WebSocket (https://code.google.com/p/phpwebsocket/) technology, but PHP is not the best language for implementing it.
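For reference, the browser side of a WebSocket connection is only a few lines; the URL, the message format and the refreshList helper below are assumptions:
// Hypothetical endpoint pushing list changes as JSON messages.
var socket = new WebSocket('ws://example.com/updates');
socket.onmessage = function (event) {
    var update = JSON.parse(event.data); // e.g. { action: 'add', item: {...} }
    refreshList(update);
};
socket.onclose = function () {
    // Could fall back to plain polling if the connection drops.
};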
One way to handle this is to use state variables for the different types of data you want to have updated (or not).
In order to avoid re-querying the full tables even if the data set in them has not changed in relation to what a particular client has displayed at any given time, you could maintain a state counter variable for the data type on the server (for example in a dedicated small table) and on the client in a javascript variable.
Whenever an update is done on the data tables on the server, you update the state counter there.
Your AJAX polling calls would then query this state counter, compare it to the corresponding javascript variable, and only do a data-update call if it has changed, updating the local javascript variable to what the server has.
In order to avoid having to poll for each datatype separately, you might want to use a JS object with a member for each datatype.
Note: yes this is all very theoretical, but, hey, so is the question ;)
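Still, a minimal sketch of the polling side, assuming a cheap /state-counters endpoint and a renderList helper:
// Poll the cheap state-counter endpoint; only fetch data that actually changed.
var knownStates = { comments: 0, posts: 0 }; // one member per datatype, as above
setInterval(function () {
    $.getJSON('/state-counters', function (serverStates) {
        for (var type in knownStates) {
            if (serverStates[type] !== knownStates[type]) {
                knownStates[type] = serverStates[type];
                $.getJSON('/data/' + type, renderList); // expensive call only on change
            }
        }
    });
}, 5000);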

which is better, searching in javascript or database?

I have a grid (employee grid) which has, say, 1000-2000 rows.
I display the employee name and department in the grid.
When I get the data for the grid, I get the other details for the employee too (date of birth, location, role, etc.).
So the user has the option to edit the employee details. When he clicks edit, I need to display the other employee details in a popup. Since I have stored all the data in JavaScript, I search for the particular id and display all the details, so the code will be like:
function getUserDetails(employeeId) {
    // employeeInformation holds all the employee details; it was stored
    // while getting the data for the grid.
    for (var i = 0; i < employeeInformation.length; i++) {
        if (employeeInformation[i].employeeID == employeeId) {
            // display employee details.
        }
    }
}
The second solution would be to pass the employeeId to the database and get all the information for the employee. The code would be like:
function getUserDetails(employeeId) {
    // Make an ajax call to the controller, which will call a procedure in the
    // database to get the employee details, then display them.
    // (The controller URL and display helper below are placeholders.)
    $.getJSON('/Employee/Details', { id: employeeId }, function (employee) {
        displayEmployeeDetails(employee);
    });
}
So, which solution do you think will be optimal when I am handling 1000-2000 records?
I don't want to make the JavaScript heavy by storing a lot of data in the page.
UPDATED:
So one of my friends came up with a simple solution.
I am storing 4 columns for 500 rows (on average), so I don't think there should be any noticeable slowness in the webpage.
While loading the rows into the grid, I give the edit link a data-rowId attribute so that it is easy to retrieve the data.
Say I store all the employee information in a variable called employeeInfo.
When someone clicks the edit link, $(this).attr('data-rowId') gives the rowId, and employeeInfo[$(this).attr('data-rowId')] gives all the information about the employee.
Instead of storing the employeeId and looping over the employee table to find the matching employeeId, the rowId does the trick. This is very simple, but it did not strike me.
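A condensed sketch of that approach; the markup, the grid id and the showEditPopup helper are assumptions:
// employeeInfo is filled once, when the grid data is loaded.
// Each edit link carries its row index: <a class="edit" data-rowId="42">Edit</a>
$('#employeeGrid').on('click', 'a.edit', function () {
    var employee = employeeInfo[$(this).attr('data-rowId')];
    showEditPopup(employee); // direct lookup, no loop needed
});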
I would suggest you make an AJAX call to the controller, for two main reasons:
It is not advisable to handle database activity in JavaScript, due to security issues.
JavaScript runs on the client-side machine, which should have the least load and computation.
JavaScript should be as light as possible, so I suggest you do it in the database itself.
Don't count on JavaScript performance, because it heavily depends on the computer it is running on. I suggest you store and search on the server side rather than loading a heavy payload of data into the browser, which is restricted to the resources of the end-user.
Running long loops in JavaScript can lead to an unresponsive and irritating UI. Use Ajax calls to get needed data as a good practice.
Are you using HTML5? Will your users typically have relatively fast multicore computers? If so, a web-worker (http://www.w3schools.com/html/html5_webworkers.asp) might be a way to offload the search to the client while maintaining UI responsiveness.
Note, I've never used a Worker, so this advice may be way off base, but they certainly look interesting for something like this.
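With that caveat, a rough sketch of handing the loop from the earlier answer to a Worker; the inline-Blob construction keeps the sketch self-contained, and displayEmployeeDetails and someEmployeeId are hypothetical names:
// Build the worker from an inline script so the sketch stays self-contained.
var workerSource =
    'onmessage = function (e) {' +
    '  var list = e.data.list;' +
    '  for (var i = 0; i < list.length; i++) {' +
    '    if (list[i].employeeID == e.data.id) { postMessage(list[i]); return; }' +
    '  }' +
    '  postMessage(null);' +
    '};';
var worker = new Worker(URL.createObjectURL(new Blob([workerSource])));
worker.onmessage = function (e) {
    if (e.data) displayEmployeeDetails(e.data); // hypothetical display helper
};
// The employee array is copied to the worker, keeping the UI thread free.
worker.postMessage({ list: employeeInformation, id: someEmployeeId });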
In terms of separation of concerns, and recommended best approach, you should be handling that domain-level data retrieval on your server, and relying on the client-side for processing and displaying only the records with which it is concerned.
By populating your client with several thousand records for it to then parse, sort, search, etc., you not only take a huge performance hit and diminish user experience, but you also create many potential security risks. Obviously this also depends on the nature of the data in the application, but for something such as employee records, you probably don't want to be storing that on the client-side. Anyone using the application will then have access to all of that.
The more pragmatic approach to this problem is to have your controller populate the client with only the specific data which pertains to it, eliminating the need for searching through many records. You can also retrieve a single object by making an ajax query to your server to retrieve the data. This has the dual benefit of guaranteeing that you're displaying the current state of the DB, as well as being far more optimized than anything you could ever hope to write in JS.

Knockout.js performance - how many observables?

I am new to Knockout.js and already like it very much.
Say I'm implementing a web blog and want to add/edit/delete blog post comments using Knockout.js. For this purpose I define a Comment viewmodel with subject, text and tags (in my real application I need many more fields, like 10 to 20).
After the message has been edited by the user and posted to the server, I want to refresh it on the screen with the latest values (including those that came from the server - say, a timestamp). It appears that I need observable (not just simple) properties for every listed field, otherwise the values will not be refreshed on the user's screen after the postback.
Now, if I have 20 observables per comment and there are 50? 100? comments on the screen, will it slow the browser down much? What about mobile devices? If so, is there another way to achieve what I want?
The other possible option is to use viewmodels only for the comment being edited. In this case I somehow need to "unbind" the other viewmodels from their HTML elements - e.g., delete them and render them again. But here I can't see a nice solution.
Interesting question.
The short, simple answer is no.
Browser performance is not really an issue unless you are specifically developing an application that is known or expected to be a performance hit.
A browser is well designed to handle very large amounts of data. Be it downloading new data from a server or rendering DOM elements. I would say a browser could handle over 1000 comments (an educated guess).
Take a look at a Google application (such as calendar) - they tend to process huge amounts of data.
This use case scenario sounds like a perfect match for the mapping plugin:
// Every time data is received from the server:
ko.mapping.fromJS(data, viewModel);
And if you ever get into performance issues, the Viewmodel plugin claims to be several times faster specifically for the task of updating your viewmodel from an updated model.
ko.viewmodel.updateFromModel(viewmodel, updatedModel);
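For completeness, a rough sketch of how the mapping-plugin variant is typically wired up; the /comments endpoint is an assumption:
// The first call creates the viewmodel; later calls update the same observables in place.
var viewModel;
function refreshComments() {
    $.getJSON('/comments', function (data) {
        if (!viewModel) {
            viewModel = ko.mapping.fromJS(data);
            ko.applyBindings(viewModel);
        } else {
            ko.mapping.fromJS(data, viewModel);
        }
    });
}
refreshComments();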

How to design a multi-user ajax web application to be concurrently safe

I have a web page that shows a large amount of data from the server. The communication is done via ajax.
Every time the user interacts and changes this data (Say user A renames something) it tells the server to do the action and the server returns the new changed data.
If user B accesses the page at the same time and creates a new data object it will again tell the server via ajax and the server will return with the new object for the user.
On A's page we have the data with a renamed object. And on B's page we have the data with a new object. On the server the data has both a renamed object and a new object.
What are my options for keeping the page in sync with the server when multiple users are using it concurrently?
Options such as locking the entire page, or dumping the entire state to the user on every change, should rather be avoided.
If it helps, in this specific example the webpage calls a static webmethod that runs a stored procedure on the database. The stored procedure will return any data it has changed and no more. The static webmethod then forwards the return of the stored procedure to the client.
Bounty Edit:
How do you design a multi-user web application which uses Ajax to communicate with the server but avoids problems with concurrency?
I.e. concurrent access to functionality and to data on a database without any risk of data or state corruption
Overview:
Intro
Server architecture
Client architecture
Update case
Commit case
Conflict case
Performance & scalability
Hi Raynos,
I will not discuss any particular product here. What others mentioned is a good toolset to have a look at already (maybe add node.js to that list).
From an architectural viewpoint, you seem to have the same problem that can be seen in version control software. One user checks in a change to an object, another user wants to alter the same object in another way => conflict. You have to integrate users changes to objects while at the same time being able to deliver updates timely and efficiently, detecting and resolving conflicts like the one above.
If I was in your shoes I would develop something like this:
1. Server-Side:
Determine a reasonable level at which you would define what I'd call "atomic artifacts" (the page? Objects on the page? Values inside objects?). This will depend on your webservers, database & caching hardware, # of users, # of objects, etc. Not an easy decision to make.
For each atomic artifact have:
an application-wide unique-id
an incrementing version-id
a locking mechanism for write-access (mutex maybe)
a small history or "changelog" inside a ringbuffer (shared memory works well for those). A single key-value pair might be OK too though less extendable. see http://en.wikipedia.org/wiki/Circular_buffer
A server or pseudo-server component that is able to deliver relevant changelogs to a connected user efficiently. Observer-Pattern is your friend for this.
2. Client-Side:
A javascript client that is able to have a long-running HTTP-Connection to said server above, or uses lightweight polling.
A javascript artifact-updater component that refreshes the sites content when the connected javascript client notifies of changes in the watched artifacts-history. (again an observer pattern might be a good choice)
A javascript artifact-committer component that may request to change an atomic artifact, trying to acquire a mutex lock. It will detect if the state of the artifact has been changed by another user just seconds before (the latency of the javascript client and the commit process factor in) by comparing the known clientside artifact-version-id with the current serverside artifact-version-id.
A javascript conflict-solver allowing for a human which-change-is-the-right decision. You may not want to just tell the user "Someone was faster than you. I deleted your change. Go cry.". Many options from rather technical diffs or more user-friendly solutions seem possible.
So how would it roll ...
Case 1: kind-of-sequence-diagram for updating:
Browser renders page
javascript "sees" artifacts which each having at least one value field, unique- and a version-id
javascript client gets started, requesting to "watch" the found artifacts' history starting from their found versions (older changes are not interesting)
Server process notes the request and continuously checks and/or sends the history
History entries may contain simple notifications "artifact x has changed, client pls request data" allowing the client to poll independently or full datasets "artifact x has changed to value foo"
javascript artifact-updater does what it can to fetch new values as soon as they become known to have updated. It executes new ajax requests or gets fed by the javascript client.
The page's DOM-content is updated, the user is optionally notified. History-watching continues. (A client-side sketch of this loop follows below.)
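A minimal sketch of the client side of this update loop, using plain long-polling; the /history endpoint and the updateDom helper are assumptions:
// Long-poll the changelog: the server answers as soon as something changes
// (or after a timeout), and the client immediately reconnects.
function watchArtifact(artifactId, knownVersion) {
    $.getJSON('/history', { id: artifactId, since: knownVersion }, function (entry) {
        if (entry && entry.version > knownVersion) {
            knownVersion = entry.version;
            updateDom(artifactId, entry.value);
        }
        watchArtifact(artifactId, knownVersion); // reconnect and keep watching
    });
}
watchArtifact('artifact-x', 123);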
Case 2: Now for committing:
artifact-committer knows the desired new value from user input and sends a change-request to the server
serverside mutex is acquired
Server receives "Hey, I know artifact x's state from version 123, let me set it to value foo pls."
If the serverside version of artifact x is equal to (it cannot be less than) 123, the new value is accepted and a new version id of 124 is generated.
The new state-information "updated to version 124" and optionally new value foo are put at the beginning of the artifact x's ringbuffer (changelog/history)
serverside mutex is released
requesting artifact committer is happy to receive a commit-confirmation together with the new id.
Meanwhile the serverside server component keeps polling/pushing the ringbuffers to connected clients. All clients watching the buffer of artifact x will get the new state information and value within their usual latency (see case 1). (A serverside sketch of this commit check follows below.)
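In node.js (mentioned above) the version check at the heart of this case could look roughly like the sketch below; the in-memory store standing in for the ringbuffer is of course an assumption:
// Hypothetical in-memory artifact store. Single-threaded node.js gives us
// the mutex semantics for free within one process.
var artifacts = { 'artifact-x': { version: 123, value: 'foo', history: [] } };

function commit(artifactId, knownVersion, newValue) {
    var artifact = artifacts[artifactId];
    if (artifact.version !== knownVersion) {
        // Case 3: reject, returning the state the client did not know about.
        return { ok: false, version: artifact.version, value: artifact.value };
    }
    artifact.version += 1;
    artifact.value = newValue;
    artifact.history.unshift({ version: artifact.version, value: newValue });
    return { ok: true, version: artifact.version };
}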
Case 3: for conflicts:
artifact committer knows desired new value from user input and sends a change-request to the server
in the meanwhile another user has updated the same artifact successfully (see case 2.), but due to various latencies this is as yet unknown to our other user.
So a serverside mutex is acquired (or waited on until the "faster" user committed his change)
Server receives "Hey, I know artifact x's state from version 123, let me set it to value foo."
On the Serverside the version of artifact x now is 124 already. The requesting client can not know the value he would be overwriting.
Obviously the Server has to reject the change request (not counting in god-intervening overwrite priorities), releases the mutex and is kind enough to send back the new version-id and new value directly to the client.
Confronted with a rejected commit request and a value the change-requesting user did not yet know, the javascript artifact-committer refers to the conflict-resolver, which displays and explains the issue to the user.
The user, being presented with some options by the smart conflict-resolver JS, is allowed another attempt to change the value.
Once the user has selected a value he deems right, the process starts over from case 2 (or from case 3, if someone else was faster again).
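The client side of the conflict case can stay small. A sketch, with /commit, updateDom and askUserToResolve as hypothetical names:
// Try to commit; on rejection, present both values and retry with the user's choice.
function commitArtifact(artifactId, knownVersion, newValue) {
    $.post('/commit', { id: artifactId, since: knownVersion, value: newValue },
        function (result) {
            if (result.ok) {
                updateDom(artifactId, newValue);
            } else {
                // Someone was faster: let the user decide, then start over from case 2.
                askUserToResolve(result.value, newValue, function (chosenValue) {
                    commitArtifact(artifactId, result.version, chosenValue);
                });
            }
        }, 'json');
}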
Some words on Performance & Scalability
HTTP Polling vs. HTTP "pushing"
Polling creates requests, one per second, 5 per second, whatever you regard as an acceptable latency. This can be rather cruel to your infrastructure if you do not configure your (Apache?) and (php?) well enough to be "lightweight" starters. It is desirable to optimize the polling request on the serverside so that it runs for far less time than the length of the polling interval. Splitting that runtime in half might well mean lowering your whole system load by up to 50%.
Pushing via HTTP (assuming webworkers are too far off to support them) will require you to have one apache/lighttpd process available for each user all the time. The resident memory reserved for each of these processes and your system's total memory will be one very certain scaling limit that you will encounter. Reducing the memory footprint of the connections will be necessary, as well as limiting the amount of continuous CPU and I/O work done in each of them (you want lots of sleep/idle time).
backend scaling
Forget database and filesystem, you will need some sort of shared memory based backend for the frequent polling (if the client does not poll directly then each running server process will)
if you go for memcache you can scale better, but it's still expensive
The mutex for commits has to work globally, even if you want to have multiple frontend servers to load-balance.
frontend scaling
regardless if you are polling or receiving "pushes", try to get information for all watched artifacts in one step.
"creative" tweaks
If clients are polling and many users tend to watch the same artifacts, you could try to publish the history of those artifacts as a static file, allowing apache to cache it, while nevertheless refreshing it on the serverside when artifacts change. This takes PHP/memcache out of the game for some requests. Lighttpd is very efficient at serving static files.
use a content delivery network like cotendo.com to push artifact history there. The push-latency will be bigger but scalability's a dream
write a real server (not using HTTP) that users connect to using java or flash(?). You have to deal with serving many users in one server-thread. Cycle through open sockets, doing (or delegating) the work required. It can scale via forking processes or starting more servers. Mutexes have to remain globally unique though.
Depending on load scenarios group your frontend- and backend-servers by artifact-id ranges. This will allow for better usage of persistent memory (no database has all the data) and makes it possible to scale the mutexing. Your javascript has to maintain connections to multiple servers at the same time though.
Well I hope this can be a start for your own ideas. I am sure there are plenty more possibilities.
I more than welcome any criticism of or enhancements to this post; wiki is enabled.
Christoph Strasen
I know this is an old question, but I thought I'd just chime in.
OT (operational transforms) seem like a good fit for your requirement for concurrent and consistent multi-user editing. It's a technique used in Google Docs (and was also used in Google Wave):
There's a JS-based library for using Operational Transforms - ShareJS (http://sharejs.org/), written by a member from the Google Wave team.
And if you want, there's a full MVC web-framework - DerbyJS (http://derbyjs.com/) built on ShareJS that does it all for you.
It uses BrowserChannel for communication between the server and clients (and I believe WebSockets support should be in the works - it was in there previously via Socket.IO, but was taken out due to the developer's issues with Socket.io) Beginner docs are a bit sparse at the moment, however.
I would consider adding time-based modified stamp for each dataset. So, if you're updating db tables, you would change the modified timestamp accordingly. Using AJAX, you can compare the client's modified timestamp with the data source's timestamp - if the user is ever behind, update the display. Similar to how this site checks a question periodically to see if anyone else has answered while you're typing an answer.
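A sketch of that comparison; the /dataset/modified endpoint, the server-stamped pageRenderedAt value and the reloadDisplay helper are assumptions:
// Ask only for the dataset's modified timestamp; re-render if the client is behind.
var lastModified = pageRenderedAt;
setInterval(function () {
    $.getJSON('/dataset/modified', function (response) {
        if (response.modified > lastModified) {
            lastModified = response.modified;
            reloadDisplay();
        }
    });
}, 5000);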
You need to use push techniques (also known as Comet or reverse Ajax) to propagate changes to the user as soon as they are made to the db. The best technique currently available for this seems to be Ajax long polling, but it isn't supported by every browser, so you need fallbacks. Fortunately there are already solutions that handle this for you. Among them are: orbited.org and the already mentioned socket.io.
In the future there will be an easier way to do this, called WebSockets, but it isn't certain yet when that standard will be ready for prime time, as there are security concerns about its current state.
There shouldn't be concurrency problems in the database with new objects. But when a user edits an object the server needs to have some logic that checks whether the object has been edited or deleted in the meantime. If the object has been deleted the solution is, again, simple: Just discard the edit.
But the most difficult problem appears when multiple users are editing the same object at the same time. If User 1 and 2 start editing an object at the same time, they will both make their edits on the same data. Let's say the changes User 1 made are sent to the server first, while User 2 is still editing the data. You then have two options: you could try to merge User 1's changes into the data of User 2, or you could tell User 2 that his data is out of date and display an error message as soon as his data gets sent to the server. The latter isn't a very user-friendly option, but the former is very hard to implement.
One of the few implementations that really got this right for the first time was EtherPad, which was acquired by Google. I believe they then used some of EtherPad's technologies in Google Docs and Google Wave, but I can't tell that for sure. Google also opensourced EtherPad, so maybe that's worth a look, depending on what you're trying to do.
It's really not easy to do this simultaneous-editing stuff, because latency makes atomic operations on the web impossible. Maybe this article will help you learn more about the topic.
Trying to write all this yourself is a big job, and it's very difficult to get it right. One option is to use a framework that's built to keep clients in sync with the database, and with each other, in realtime.
I've found that the Meteor framework does this well (http://docs.meteor.com/#reactivity).
"Meteor embraces the concept of reactive programming. This means that you can write your code in a simple imperative style, and the result will be automatically recalculated whenever data changes that your code depends on."
"This simple pattern (reactive computation + reactive data source) has wide applicability. The programmer is saved from writing unsubscribe/resubscribe calls and making sure they are called at the right time, eliminating whole classes of data propagation code which would otherwise clog up your application with error-prone logic."
I can't believe that nobody has mentioned Meteor. It's a new and immature framework for sure (and only officially supports one DB), but it takes all the grunt work and thinking out of a multi-user app like the poster is describing. In fact, you can't NOT build a multi-user live-updating app. Here's a quick summary:
Everything is in node.js (JavaScript or CoffeeScript), so you can share stuff like validations between the client and server.
It uses websockets, but can fall back for older browsers
It focuses on immediate updates to the local objects (i.e. the UI feels snappy), with changes sent to the server in the background. Only atomic updates are allowed, to make mixing updates simpler. Updates rejected on the server are rolled back.
As a bonus, it handles live code reloads for you, and will preserve user state even when the app changes radically.
Meteor is simple enough that I would suggest you at least take a look at it for ideas to steal.
These Wikipedia pages may help add perspective for learning about concurrency and concurrent computing when designing an ajax web application that either pulls state-event (EDA) messages or has them pushed to it in a messaging pattern. Basically, messages are replicated out to channel subscribers, which respond to change events and synchronization requests.
https://en.wikipedia.org/wiki/Category:Concurrency_control
https://en.wikipedia.org/wiki/Distributed_concurrency_control
https://en.wikipedia.org/wiki/CAP_theorem
https://en.wikipedia.org/wiki/Operational_transformation
https://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
There are many forms of concurrent web-based collaborative software.
There are a number of HTTP API client libraries for etherpad-lite, a collaborative real-time editor.
django-realtime-playground implements a realtime chat app in Django with various real-time technologies like Socket.io.
Both AppEngine and AppScale implement the AppEngine Channel API; which is distinct from the Google Realtime API, which is demonstrated by googledrive/realtime-playground.
Server-side push techniques are the way to go here. Comet is (or was?) a buzz word.
The particular direction you take depends heavily on your server stack, and how flexible you/it is. If you can, I would take a look at socket.io, which provides a cross-browser implementation of websockets, which in turn provide a very streamlined way to have bidirectional communication with the server, allowing the server to push updates to the clients.
In particular, see this demonstration by the library's author, which demonstrates almost exactly the situation you describe.
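For a flavour of the API, a minimal socket.io pairing; the event names and payload shape are assumptions:
// Server (node.js): broadcast every accepted change to all connected clients.
var io = require('socket.io').listen(8080);
io.sockets.on('connection', function (socket) {
    socket.on('rename', function (change) {
        // ...apply the change to the database here...
        io.sockets.emit('changed', change); // push to everyone, including the sender
    });
});

// Client (browser, with the socket.io script included):
var socket = io.connect('http://localhost:8080');
socket.on('changed', function (change) {
    updateDom(change); // hypothetical DOM updater
});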

Ajax "Is there new content? If so, update page" - How to do this without breaking the server?

It's a simple case of a javascript that continuously asks "are we there yet?", like a four-year-old on a car drive. But, much like parents, if you do this too often, or with too many kids at once, the server will buckle under the pressure.
How do you solve the issue of having a webpage that looks for new content in the order of every 5 seconds and that allows for a larger number of visitors?
stackoverflow does it some way, don't know how though.
The more standard way would indeed be the javascript that looks for new content every few seconds.
A more advanced way would use a push-like technique, by using Comet techniques (long-polling and such). There's a lot of interesting stuff under that link.
I'm still waiting for a good opportunity to use it myself...
Oh, and here's a link from stackoverflow about it:
Is there some way to PUSH data from web server to browser?
In Java I used an Ajax library (DWR) using Comet technology - I think you should search for a PHP library that uses it.
The idea is that the server sends one very long HTTP response, and when it has something to send to the client, it ends it and sends a new response with the updated data.
Using it, the client doesn't have to ping the server every x seconds to get new data - I think it could help you.
You could make the poll time variable depending on the number of clients. Using your metaphor, the kid asks "Are we there yet?" and the driver responds "No, but maybe in an hour". Thankfully, Javascript isn't a stubborn kid so you can be sure he won't bug you until then.
You could consider polling every 5 seconds to start with, but after a while start to increase the poll interval time - perhaps up to some upper limit (1 minute, 5 minute - whatever seems optimal for your usage). The increase doesn't have to be linear.
A more sophisticated spin (which could incorporate monzee's suggestion to vary by the number of clients) would be to allow the server to dictate the interval before the next poll. The server could then increase the interval over time, and you can even change the algorithm on the fly, or in response to network load; a sketch follows below.
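A sketch of that server-dictated backoff, assuming the poll response carries a nextPollIn field and a refreshContent helper exists:
// The server tells the client when to come back; default to 5s if it doesn't say.
function poll() {
    $.getJSON('/updates', function (response) {
        if (response.changed) {
            refreshContent(response);
        }
        setTimeout(poll, response.nextPollIn || 5000); // interval in ms
    });
}
poll();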
You could take a look at the 'Twisted' framework in Python. It's an event-driven network programming framework that might satisfy what you are looking for. It can be used to push messages from the server.
Perhaps you can send a query to a really simple script that doesn't need to make a real db-query, but only uses a simple timestamp to tell if there is anything new.
And then, if the answer is true, you can do the real query, where the server has to do real work !-)
I would have a single instance calling the DB, and if a newer timestamp exists, put that new timestamp in an application variable. Then let all sessions check against that application variable. Or something like that. That way only one instance is calling the sql-server, and the number of clients doesn't matter.
I haven't tried this, and it's just the first idea off the top of my head, but I think that caching the timestamp and letting the clients check the cache is a way to do it. How to implement the cache (sql-server cache, application variable and so on) I don't know what's best.
Regarding how SO does it, note that it doesn't check for new answers continuously, only when you're typing into the "Your Answer" box.
The key then, is to first do a computationally cheap operation to weed out common "no update needed" cases (e.g., entering a new answer or checking a timestamp) before initiating a more expensive process to actually retrieve any changes.
Alternately, depending on your application, you may be able to resolve this by optimizing your change-publishing mechanism. For example, perhaps it might be feasible for changes (or summaries of them) to be put onto an RSS feed and have clients watch the feed instead of the real application. We can assume that this would be fairly efficient, as it's exactly the sort of thing RSS is designed and optimized for, plus it would have the additional benefit of making your application much more interoperable with the rest of the world at little or no cost to you.
I believe the approach should be based on a combination of server-side sockets and client-side ajax/comet. Like:
Assume a chat application with several logged on users, and that each of them is listening via a slow-load AJAX call to the server-side listener script.
Whatever browser gets the just-entered data submits it to the server with an ajax call to a writer script. That server updates the database (or storage system) and posts a sockets write to noted listener script. The latter then gets the fresh data and posts it back to the client browser.
Now I haven't yet written this, and right now I dunno whether/how the browser limit of two concurrent connections screws up the above logic.
Will appreciate hearing from anyone with thoughts here.
AS
