Javascript: Weather API design question with hourly polling

I am doing practice interviews and specifically prepping for the Design portion. One mentions:
Design a weather widget that pulls data from a service API which makes data available every hour. Avoid pulling the data from it all the time if there are no changes. Then, what happens when you scale this to lots of users?
My first thought would obviously be to create a function that fetches the data from the GET endpoint and then parses the JSON.
The part that would throw me off, though, is: "Avoid pulling the data from it all the time if there are no changes". How can I know there are no changes without first pulling the data? My only thought would be to create an ignore flag:
Pull the data and mark the temperature as 55 degrees. Create a flag that ignores values within +/- 3 degrees of this temperature.
Next hour, pull the data and see the temperature is 56 degrees. That is within the ignore range (e.g. if (Math.abs(temperature - nextTemp) <= 3) { ignoreFor5Hours = true; }). This would then stop the hourly pulling for 5 hours, or however long someone sets it to.
Does this make sense or am I thinking about this the wrong way?

Assuming the data is not updated regularly
It sounds confusing because the client side has no way to actively know whether the data has been updated without first pulling it from the server.
One way I would suggest is to use two-way communication, such as socket.io. That is, you establish a connection to the server; once there is an update, the server can notify your client app to fetch the data.
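A minimal client-side sketch of that idea (the server URL, the "weather-updated" event name, and renderWidget are all made up for illustration; your server would emit the event whenever fresh data lands):

```javascript
// Sketch only: assumes a socket.io server that emits a hypothetical
// "weather-updated" event whenever new data becomes available.
import { io } from "socket.io-client";

const socket = io("https://weather.example.com"); // hypothetical server

socket.on("weather-updated", async () => {
  // Fetch only now, because the server told us something changed.
  const res = await fetch("https://weather.example.com/api/current");
  const weather = await res.json();
  renderWidget(weather); // hypothetical render function
});
```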
Another way is to use long polling, or, much like your interval fetching, to pull a hash from the server and check whether it has changed. This is also not ideal, as you still load your server with a hanging request, but at least the data traffic will be smaller.
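A rough sketch of the hash-check variant (both endpoints and renderWidget are hypothetical):

```javascript
// Poll a cheap hash endpoint and fetch the full payload only on change.
let lastHash = null;

setInterval(async () => {
  const hash = await (await fetch("/api/weather/hash")).text();
  if (hash !== lastHash) {
    lastHash = hash;
    const weather = await (await fetch("/api/weather")).json();
    renderWidget(weather); // hypothetical render function
  }
}, 60 * 60 * 1000); // hourly, matching the API's update cadence
```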
These methods are obviously not optimal, but if you must follow the guideline literally, those can be your options.
If the data is updated regularly
Go with the caching option provided by Phil

I would do nothing special at all.
It should be up to the API to specify the appropriate Cache-Control headers. The browser will handle the caching, so subsequent fetches will use the cache if applicable.
The API knows how fresh its data is, and knows when it expects to be updated. It's also the case with weather that certain weather patterns change faster than others. That's why it's up to the API to decide what to do.
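To illustrate (the endpoint is hypothetical and the header value is just an example):

```javascript
// Nothing special is needed on the client: if the API responds with e.g.
// "Cache-Control: max-age=3600", repeat fetches within that hour are
// served from the browser's HTTP cache automatically.
async function getWeather() {
  const res = await fetch("/api/weather"); // may be answered from cache
  return res.json();
}

// If you ever need to be explicit, fetch exposes cache modes, e.g.
// force revalidation with the server (via ETag/If-None-Match) each call:
async function getWeatherRevalidated() {
  const res = await fetch("/api/weather", { cache: "no-cache" });
  return res.json();
}
```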

Related

architecture for get and store api request data

This is more of an architectural question. An external platform has product and price information for, let's say, books. There is an API available to get this information.
What I read is that it should be possible to create a JavaScript function and attach it to the page on my own website where I want to show the data. This would mean that an API call is made for each page request. Since the requested information changes at most once a day, this does not sound like the most efficient solution.
Can someone advise a better solution? Something in the direction of a PHP or JavaScript function that makes the request in the background, schedules an update, and imports the data into MySQL? If so, what language would be most common?
I need the solution for a Joomla/PHP/MySQL environment.
Here's a simple idea: fetch and store results from the API (ones you think aren't going to change within a day), either on disk or in the database, and later use these stored results to retrieve what you otherwise would've fetched from the API.
Since storing anything in frontend JS across page reloads isn't easy, you need to make use of PHP for that. Based on what's given, you seem to have two ways of calling the API:
via the frontend JS (no-go)
via your PHP backend (good-to-go)
Now, you need to make sure your results are synced every (say) 24 hours.
Add a snippet to your PHP code that contains a variable $lastUpdated (or something similar), and assign it the "static" value of the current time (NOT using time()). Now, add a couple of statements to update the stored results if the current time is at least 24 hours later than $lastUpdated, and then update $lastUpdated to the current time.
This should give you what you need with one API call per day.
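The answer above targets PHP; for illustration only, here is the same once-a-day pattern sketched in JavaScript, with loadStored, saveStored, and fetchFromApi as hypothetical helpers:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

async function getBooks() {
  const cached = await loadStored(); // hypothetical: reads { updatedAt, data } from disk/DB
  if (cached && Date.now() - cached.updatedAt < DAY_MS) {
    return cached.data; // still fresh: serve stored results, no API call
  }
  const data = await fetchFromApi(); // hypothetical API wrapper
  await saveStored({ updatedAt: Date.now(), data }); // remember when we refreshed
  return data;
}
```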
PS: I'm not an expert in PHP, but you can surely figure out the datetime stuff.
It sounds like you need a cache, and you're not the first person to run into that problem - so you probably don't need to reinvent the wheel and build your own.
Look into something like Redis. There's an article on it available here as well: https://www.compose.com/articles/api-caching-with-redis-and-nodejs/
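A minimal read-through cache along those lines, sketched with the node-redis v4 client (the endpoint and TTL are illustrative):

```javascript
import { createClient } from "redis";

const redis = createClient();
await redis.connect();

async function getWeather() {
  const cached = await redis.get("weather");
  if (cached) return JSON.parse(cached); // cache hit: no upstream call

  const data = await (await fetch("https://api.example.com/weather")).json();
  await redis.set("weather", JSON.stringify(data), { EX: 3600 }); // expire after 1h
  return data;
}
```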

Should I use a cache for this?

I wrote some code over the summer holidays, and today I looked at it again for the first time. I am struggling with one thing I did.
My system has multiple types (pages, newsletters, etc.) and multiple subtypes (items, archive, concepts, etc.). The idea is that I now have an object like this:
object { 1: { normal: { 1: { content: 'somecontent', title: 'sometitle' } } } }
Another example:
object { 1: { normal: { 1: { content: 'somecontent', title: 'sometitle' } }, archive: {} }, 2: { normal: {} } }
The data originally comes from the database. I'm making a system to edit pages on the website and other things like newsletters, which is why I have multiple types and subtypes.
I made a cache because I don't want to fetch all items from the database every time. But now the problem is that whenever I add, edit, or remove an item, I also have to add it to, edit it in, or remove it from the cache.
My question: is this a good way? I thought it was, because you don't have to make an AJAX call to get the data from the database.
I'm sorry if I'm not allowed to ask this here.
My question: is this a good way? I thought it is because you don't
have to call an AJAX file to get the data from the database.
The answer is that "it depends". There is no always right and always wrong answer for caching because caching is a tradeoff between efficiency and timeliness of data.
If you want maximum efficiency, you cache like crazy, but your data may not be perfectly up to date because you're using old data from the cache.
If you want the most up-to-date data, you don't cache anything so you always get the latest data, but obviously efficiency may suffer if you are regularly requesting the same data over and over.
So, it's a tradeoff and the tradeoff depends entirely upon the application, its needs, how often the data is modified and what the consequences are for having stale data or for not caching. There is no single right or wrong answer for that tradeoff. It depends entirely upon the particular situation for your application and the tradeoff may even be different for some types of data vs. others within the same application.
For example, let's suppose you were writing an online bidding site that offered some functionality like eBay. You would probably be fine caching the item description for at least several hours because that almost never changes, and even if it does, the consequences of being a bit tardy in seeing a new item description are fairly low. But you could never cache the data on the current bid because the timeliness of that information is critical. The user needs to always see the latest info on the current bid, even if you have to make some sacrifices in efficiency.
Also, remember that caching isn't completely all or none. You can set a lifetime for a cached value such that it can only be used for a certain period of time that is appropriate for the type of data. For example, you might cache an item description in the above auction for up to 2 hours. This allows you to achieve some efficiency gains, but also to eventually see the new data if it happens to change.
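A tiny in-memory sketch of per-entry lifetimes, using the auction example (all names and values are illustrative):

```javascript
// Minimal in-memory cache with per-entry TTLs (sketch only).
const cache = new Map();

function setCached(key, value, ttlMs) {
  cache.set(key, { value, expires: Date.now() + ttlMs });
}

function getCached(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) {
    cache.delete(key); // stale: evict so the caller fetches fresh data
    return undefined;
  }
  return entry.value;
}

// e.g. cache an item description for 2 hours, but never the current bid:
setCached("item:42:description", "First edition, good condition", 2 * 60 * 60 * 1000);
```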
In general, you have to review the consequences of showing stale data. If the consequences for having data that is even minutes out of date are high (like the latest price in a live auction), then you can't cache that data at all.
If the consequences of having data that is even hours out of date are low, then you can likely cache that value for at least several hours - maybe even longer.
And, when considering what to cache, you obviously want to first look at the items that are most requested and are the most expensive on your server to retrieve. Some analysis of the usage pattern on your server would give you a prioritized list of candidates to consider for caching.
My question: is this a good way? I thought it is because you don't
have to call an AJAX file to get the data from the database.
This is fine if
1) You want to provide offline reading continuity to the user. User doesn't have to wait for internet connection to be available so that they can read at any time.
2) Your data-service is quite heavy and you want to avoid multiple/frequent visits to the server to get the same data over and over again.
3) You want your app to be bundled with a native package (like phonegap) to become a hybrid app and give a complete offline experience to the user.
This is not a comprehensive list, but just something to get you started in terms of when to go offline-capable and when not to.
So, on the other hand, this is a bad idea if
1) Your local storage structure is going to change frequently, requiring the user to re-install (unless you can figure out auto-upgrade of local storage).
2) All your features are transactional and require sync with other users as well.
Nothing wrong with your approach, just make sure you have kept these points in mind while managing the client-side cache:
Maintain one 'version' variable; this version is to be incremented whenever there's any change in structure. It is sent to the client every time, and the client is responsible for comparing versions and emptying the client cache if the server version is greater than the client version.
You can implement this yourself or find open-source libraries to handle your AJAX responses; this one might be useful: https://github.com/SaneMethod/jquery-ajax-localstorage-cache.
You can set a proper expiry header from the server, which also helps the browser cache the response for you if it is a GET request.
You can also implement a server-side cache, which will not make calls to the database; it caches the response against the request URL. Note: if different users are supposed to receive different responses, this approach won't work. You can invalidate the cache whenever any change happens to that particular data set (delete/update).
In your case you can also maintain flags on the server that simply tell whether the data has been updated, along with the time of the article update; if the stored version is older, you make a server request, otherwise you just use the local version.
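A minimal sketch of the version check from the first point above (the storage key and function names are illustrative):

```javascript
// The server sends a schema/version number with every response;
// bump it on the server whenever the cached structure changes.
function syncCacheVersion(serverVersion) {
  const clientVersion = Number(localStorage.getItem("cacheVersion") || 0);
  if (serverVersion > clientVersion) {
    localStorage.clear(); // structure changed: drop the stale client cache
    localStorage.setItem("cacheVersion", String(serverVersion));
  }
}
```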
I hope it helps.

Caching information from API queries - Limited to 10 per 10s

Relatively new to databases here (and DBA work).
I've recently been looking into Riot Games' APIs. However, now realising that you're limited to 10 calls per 10 seconds, I need to change my front-end code, which was originally just loading all the information with lots and lots of API calls, into something that uses a MySQL database.
I would like to collect ranked data about each player and list them (30+ players) in an ordered ranking. I was thinking, as mentioned in their Rate Limiting page, of "caching" data when GET-ing it, and then, when needing that information again, checking if it is still relevant: if so, use it; if not, re-GET it.
My idea is to add a time 30 minutes in the future (the rough length of a game) to a column in a table, and when calling, check whether the server time is ahead of the saved time. Is this the right approach/idea of caching? If not, what is the best practice?
Either way, this doesn't solve the problem of loading 30+ values for the first time, when no previous calls have been made to cache.
Any advice would be welcome, even advice telling me I'm doing completely the wrong thing!
If there is more information needed I can edit it in, let me know.
tl;dr What's best practice to get around Rate-Limiting?
Generally yes. Most of the large applications simply use guesstimated rate limits, or a manual cache (check the DB for a recent call, then go to the API if it's an old call).
When you use large sites like op.gg or lolKing for Summoner look-ups, they all give you a "Must wait X minutes before doing another DB check/call" message; I also do this. So yes, using an estimated number (like a game length) to handle your rate limit is definitely a common practice that I have observed within the Riot Developer community. Some people do go all out and implement actual caching layers/frameworks, but you don't need to do that for smaller applications.
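A sketch of that check-DB-then-API flow, with db and riotApi as hypothetical server-side helpers:

```javascript
const THIRTY_MIN = 30 * 60 * 1000; // roughly one game length

async function getRankedData(summonerId) {
  const row = await db.findPlayer(summonerId); // hypothetical DB helper
  if (row && Date.now() - row.fetchedAt < THIRTY_MIN) {
    return row.data; // fresh enough: no API call, no rate-limit cost
  }
  const data = await riotApi.fetchRanked(summonerId); // hypothetical, server-side only
  await db.savePlayer(summonerId, { data, fetchedAt: Date.now() });
  return data;
}
```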
I recommend building up your app's main functionality first, submit it, and get it approved for a higher rate limit as well. :)
Also, since you mentioned adjusting your front-end code for the calls: make sure your API calls are in server-side code, for security reasons.

How to design a multi-user ajax web application to be concurrently safe

I have a web page that shows a large amount of data from the server. The communication is done via ajax.
Every time the user interacts and changes this data (Say user A renames something) it tells the server to do the action and the server returns the new changed data.
If user B accesses the page at the same time and creates a new data object it will again tell the server via ajax and the server will return with the new object for the user.
On A's page we have the data with a renamed object. And on B's page we have the data with a new object. On the server the data has both a renamed object and a new object.
What are my options for keeping the page in sync with the server when multiple users are using it concurrently?
Options such as locking the entire page or dumping the entire state to the user on every change should preferably be avoided.
If it helps, in this specific example the webpage calls a static webmethod that runs a stored procedure on the database. The stored procedure will return any data it has changed and no more. The static webmethod then forwards the return of the stored procedure to the client.
Bounty Edit:
How do you design a multi-user web application which uses Ajax to communicate with the server but avoids problems with concurrency?
I.e. concurrent access to functionality and to data on a database without any risk of data or state corruption
Overview:
Intro
Server architecture
Client architecture
Update case
Commit case
Conflict case
Performance & scalability
Hi Raynos,
I will not discuss any particular product here. What others mentioned is a good toolset to have a look at already (maybe add node.js to that list).
From an architectural viewpoint, you seem to have the same problem that can be seen in version control software. One user checks in a change to an object, another user wants to alter the same object in another way => conflict. You have to integrate users changes to objects while at the same time being able to deliver updates timely and efficiently, detecting and resolving conflicts like the one above.
If I was in your shoes I would develop something like this:
1. Server-Side:
Determine a reasonable level at which you would define what I'd call "atomic artifacts" (the page? Objects on the page? Values inside objects?). This will depend on your webservers, database & caching hardware, # of users, # of objects, etc. Not an easy decision to make.
For each atomic artifact have:
an application-wide unique-id
an incrementing version-id
a locking mechanism for write-access (mutex maybe)
a small history or "changelog" inside a ringbuffer (shared memory works well for those). A single key-value pair might be OK too though less extendable. see http://en.wikipedia.org/wiki/Circular_buffer
A server or pseudo-server component that is able to deliver relevant changelogs to a connected user efficiently. Observer-Pattern is your friend for this.
2. Client-Side:
A javascript client that is able to have a long-running HTTP-Connection to said server above, or uses lightweight polling.
A javascript artifact-updater component that refreshes the sites content when the connected javascript client notifies of changes in the watched artifacts-history. (again an observer pattern might be a good choice)
A javascript artifact-committer component that may request to change an atomic artifact, trying to acquire a mutex lock. It will detect if the state of the artifact has been changed by another user just seconds before (the latency of the javascript client and the commit process factor in) by comparing the known clientside artifact-version-id and the current serverside artifact-version-id.
A javascript conflict-solver allowing for a human which-change-is-the-right decision. You may not want to just tell the user "Someone was faster than you. I deleted your change. Go cry.". Many options from rather technical diffs or more user-friendly solutions seem possible.
So how would it roll ...
Case 1: kind-of-sequence-diagram for updating:
Browser renders page
javascript "sees" artifacts which each having at least one value field, unique- and a version-id
javascript client gets started, requesting to "watch" the found artifacts history starting from their found versions (older changes are not interesting)
Server process notes the request and continuously checks and/or sends the history
History entries may contain simple notifications "artifact x has changed, client pls request data" allowing the client to poll independently or full datasets "artifact x has changed to value foo"
javascript artifact-updater does what it can to fetch new values as soon as they become known to have updated. It executes new ajax requests or gets fed by the javascript client.
The pages DOM-content is updated, the user is optionally notified. History-watching continues.
Case 2: Now for committing:
artifact-committer knows the desired new value from user input and sends a change-request to the server
serverside mutex is acquired
Server receives "Hey, I know artifact x's state from version 123, let me set it to value foo pls."
If the serverside version of artifact x is equal to (it cannot be less than) 123, the new value is accepted and a new version id of 124 is generated.
The new state-information "updated to version 124" and optionally new value foo are put at the beginning of the artifact x's ringbuffer (changelog/history)
serverside mutex is released
the requesting artifact-committer is happy to receive a commit-confirmation together with the new id.
meanwhile the serverside component keeps polling/pushing the ringbuffers to connected clients. All clients watching the buffer of artifact x will get the new state information and value within their usual latency (see case 1).
Case 3: for conflicts:
artifact committer knows desired new value from user input and sends a change-request to the server
in the meanwhile another user updated the same artifact successfully (see case 2.) but due to various latencies this is yet unknown to our other user.
So a serverside mutex is acquired (or waited on until the "faster" user committed his change)
Server receives "Hey, I know artifact x's state from version 123, let me set it to value foo."
On the Serverside the version of artifact x now is 124 already. The requesting client can not know the value he would be overwriting.
Obviously the Server has to reject the change request (not counting in god-intervening overwrite priorities), releases the mutex and is kind enough to send back the new version-id and new value directly to the client.
confronted with a rejected commit request and a value the change-requesting user did not yet know, the javascript artifact committer refers to the conflict resolver which displays and explains the issue to the user.
The user, being presented with some options by the smart conflict-resolver JS, is allowed another attempt to change the value.
Once the user selected a value he deems right, the process starts over from case 2 (or case 3 if someone else was faster, again)
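A serverside sketch of the commit path from cases 2 and 3 (acquireMutex and store are hypothetical stand-ins for whatever locking and storage you use):

```javascript
// Accept a commit only if the client saw the latest version.
async function commitChange(artifactId, knownVersion, newValue) {
  const release = await acquireMutex(artifactId); // hypothetical global lock
  try {
    const artifact = await store.get(artifactId); // { version, value, history }
    if (artifact.version !== knownVersion) {
      // Conflict (case 3): someone committed first. Hand back the current
      // state so the client's conflict-resolver can present it to the user.
      return { ok: false, version: artifact.version, value: artifact.value };
    }
    artifact.version += 1; // e.g. 123 -> 124
    artifact.value = newValue;
    artifact.history.push({ version: artifact.version, value: newValue }); // changelog
    await store.put(artifactId, artifact);
    return { ok: true, version: artifact.version }; // commit-confirmation (case 2)
  } finally {
    release(); // always free the mutex
  }
}
```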
Some words on Performance & Scalability
HTTP Polling vs. HTTP "pushing"
Polling creates requests, one per second, 5 per second, whatever you regard as an acceptable latency. This can be rather cruel to your infrastructure if you do not configure your (Apache?) and (PHP?) well enough to be "lightweight" starters. It is desirable to optimize the polling request on the serverside so that it runs for far less time than the length of the polling interval. Splitting that runtime in half might well mean lowering your whole system load by up to 50%.
Pushing via HTTP (assuming websockets are too far off to rely on) will require you to have one apache/lighttpd process available for each user all the time. The resident memory reserved for each of these processes and your system's total memory will be one very certain scaling limit that you will encounter. Reducing the memory footprint of the connections will be necessary, as well as limiting the amount of continuous CPU and I/O work done in each of them (you want lots of sleep/idle time).
backend scaling
Forget database and filesystem, you will need some sort of shared memory based backend for the frequent polling (if the client does not poll directly then each running server process will)
if you go for memcache you can scale better, but it's still expensive
The mutex for commits has to work globally, even if you want to have multiple frontend servers to load-balance.
frontend scaling
regardless of whether you are polling or receiving "pushes", try to get information for all watched artifacts in one step.
"creative" tweaks
If clients are polling and many users tend to watch the same artifacts, you could try to publish the history of those artifacts as a static file, allowing Apache to cache it, nevertheless refreshing it on the serverside when artifacts change. This takes PHP/memcache out of the game for some requests. lighttpd is very efficient at serving static files.
use a content delivery network like cotendo.com to push artifact history there. The push-latency will be bigger but scalability's a dream
write a real server (not using HTTP) that users connect to using Java or Flash(?). You have to deal with serving many users in one server-thread, cycling through open sockets, doing (or delegating) the work required. It can scale via forking processes or starting more servers. Mutexes have to remain globally unique though.
Depending on load scenarios group your frontend- and backend-servers by artifact-id ranges. This will allow for better usage of persistent memory (no database has all the data) and makes it possible to scale the mutexing. Your javascript has to maintain connections to multiple servers at the same time though.
Well I hope this can be a start for your own ideas. I am sure there are plenty more possibilities.
I more than welcome any criticism or enhancements to this post; wiki is enabled.
Christoph Strasen
I know this is an old question, but I thought I'd just chime in.
OT (operational transforms) seem like a good fit for your requirement for concurrent and consistent multi-user editing. It's a technique used in Google Docs (and was also used in Google Wave):
There's a JS-based library for using Operational Transforms - ShareJS (http://sharejs.org/), written by a member from the Google Wave team.
And if you want, there's a full MVC web-framework - DerbyJS (http://derbyjs.com/) built on ShareJS that does it all for you.
It uses BrowserChannel for communication between the server and clients (and I believe WebSockets support should be in the works - it was in there previously via Socket.IO, but was taken out due to the developer's issues with Socket.io) Beginner docs are a bit sparse at the moment, however.
I would consider adding a time-based modified stamp to each dataset. So, if you're updating DB tables, you would change the modified timestamp accordingly. Using AJAX, you can compare the client's modified timestamp with the data source's timestamp: if the user is ever behind, update the display. Similar to how this site periodically checks a question to see if anyone else has answered while you're typing an answer.
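For example (both endpoints and refreshDisplay are hypothetical):

```javascript
// Poll a cheap endpoint and only reload the dataset when the server's
// modified timestamp is newer than the one we already have.
let lastModified = 0;

setInterval(async () => {
  const serverModified = Number(await (await fetch("/api/modified")).text());
  if (serverModified > lastModified) {
    lastModified = serverModified;
    const data = await (await fetch("/api/data")).json();
    refreshDisplay(data); // hypothetical UI update
  }
}, 5000);
```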
You need to use push techniques (also known as Comet or reverse Ajax) to propagate changes to the user as soon as they are made to the db. The best technique currently available for this seems to be Ajax long polling, but it isn't supported by every browser, so you need fallbacks. Fortunately there are already solutions that handle this for you. Among them are: orbited.org and the already mentioned socket.io.
In the future there will be an easier way to do this, called WebSockets, but it isn't certain yet when that standard will be ready for prime time, as there are security concerns about the current state of the standard.
There shouldn't be concurrency problems in the database with new objects. But when a user edits an object the server needs to have some logic that checks whether the object has been edited or deleted in the meantime. If the object has been deleted the solution is, again, simple: Just discard the edit.
But the most difficult problem appears when multiple users are editing the same object at the same time. If User 1 and User 2 start editing an object at the same time, they will both make their edits on the same data. Let's say the changes User 1 made are sent to the server first, while User 2 is still editing the data. You then have two options: you could try to merge User 1's changes into the data of User 2, or you could tell User 2 that his data is out of date and display an error message as soon as his data gets sent to the server. The latter isn't a very user-friendly option, but the former is very hard to implement.
One of the few implementations that really got this right for the first time was EtherPad, which was acquired by Google. I believe they then used some of EtherPad's technologies in Google Docs and Google Wave, but I can't tell that for sure. Google also opensourced EtherPad, so maybe that's worth a look, depending on what you're trying to do.
It's really not easy to do this simultaneous-editing stuff, because it's not possible to do atomic operations on the web due to latency. Maybe this article will help you learn more about the topic.
Trying to write all this yourself is a big job, and it's very difficult to get it right. One option is to use a framework that's built to keep clients in sync with the database, and with each other, in realtime.
I've found that the Meteor framework does this well (http://docs.meteor.com/#reactivity).
"Meteor embraces the concept of reactive programming. This means that you can write your code in a simple imperative style, and the result will be automatically recalculated whenever data changes that your code depends on."
"This simple pattern (reactive computation + reactive data source) has wide applicability. The programmer is saved from writing unsubscribe/resubscribe calls and making sure they are called at the right time, eliminating whole classes of data propagation code which would otherwise clog up your application with error-prone logic."
I can't believe that nobody has mentioned Meteor. It's a new and immature framework for sure (and only officially supports one DB), but it takes all the grunt work and thinking out of a multi-user app like the poster is describing. In fact, you can't NOT build a multi-user live-updating app with it. Here's a quick summary:
Everything is in node.js (JavaScript or CoffeeScript), so you can share stuff like validations between the client and server.
It uses websockets, but can fall back for older browsers
It focuses on immediate updates to local object (i.e. the UI feels snappy), with changes sent to the server in the background. Only atomic updates are allowed to make mixing updates simpler. Updates rejected on the server are rolled back.
As a bonus, it handles live code reloads for you, and will preserve user state even when the app changes radically.
Meteor is simple enough that I would suggest you at least take a look at it for ideas to steal.
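For a flavour of it, a minimal sketch using Meteor's classic publish/subscribe API (this assumes a Blaze template named "list" defined in your HTML; the collection name is illustrative):

```javascript
// items.js — Meteor runs this on both client and server
Items = new Mongo.Collection("items");

if (Meteor.isServer) {
  Meteor.publish("items", function () {
    return Items.find(); // stream this collection to subscribers
  });
}

if (Meteor.isClient) {
  Meteor.subscribe("items");
  Template.list.helpers({
    items: function () {
      return Items.find(); // reactive: re-renders when anyone changes the data
    },
  });
}
```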
These Wikipedia pages may help add perspective on concurrency and concurrent computing for designing an ajax web application that either pulls state-event (EDA) messages or has them pushed to it in a messaging pattern. Basically, messages are replicated out to channel subscribers, which respond to change events and synchronization requests.
https://en.wikipedia.org/wiki/Category:Concurrency_control
https://en.wikipedia.org/wiki/Distributed_concurrency_control
https://en.wikipedia.org/wiki/CAP_theorem
https://en.wikipedia.org/wiki/Operational_transformation
https://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
There are many forms of concurrent web-based collaborative software.
There are a number of HTTP API client libraries for etherpad-lite, a collaborative real-time editor.
django-realtime-playground implements a realtime chat app in Django with various real-time technologies like Socket.io.
Both AppEngine and AppScale implement the AppEngine Channel API; which is distinct from the Google Realtime API, which is demonstrated by googledrive/realtime-playground.
Server-side push techniques are the way to go here. Comet is (or was?) a buzz word.
The particular direction you take depends heavily on your server stack, and how flexible you/it is. If you can, I would take a look at socket.io, which provides a cross-browser implementation of websockets, which provide a very streamlined way to have bidirectional communication with the server, allowing the server to push updates to the clients.
In particular, see this demonstration by the library's author, which demonstrates almost exactly the situation you describe.

Ajax "Is there new content? If so, update page" - How to do this without breaking the server?

It's a simple case of a javascript that continuously asks "are we there yet?", like a four-year-old on a car ride. But, much like parents, if you do this too often, or with too many kids at once, the server will buckle under the pressure.
How do you solve the issue of having a webpage that looks for new content in the order of every 5 seconds and that allows for a larger number of visitors?
Stack Overflow does it somehow, though I don't know how.
The more standard way would indeed be the javascript that looks for new content every few seconds.
A more advanced way would use a push-like technique, by using Comet techniques (long-polling and such). There's a lot of interesting stuff under that link.
I'm still waiting for a good opportunity to use it myself...
Oh, and here's a link from stackoverflow about it:
Is there some way to PUSH data from web server to browser?
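For reference, a bare-bones client-side long-poll loop (the endpoint and applyUpdate handler are hypothetical):

```javascript
// The server holds each request open until something changes or a
// timeout elapses; the client immediately reconnects afterwards.
let lastSeen = 0;

async function longPoll() {
  while (true) {
    try {
      const res = await fetch("/api/updates?since=" + lastSeen);
      if (res.ok) {
        const update = await res.json();
        lastSeen = update.timestamp;
        applyUpdate(update); // hypothetical UI handler
      }
    } catch (err) {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}

longPoll();
```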
In Java I used an Ajax library (DWR) with Comet technology; I think you should search for a PHP library that uses it.
The idea is that the server sends one very long HTTP response, and when it has something to send to the client, it ends it and sends a new response with the updated data.
Using it, the client doesn't have to ping the server every X seconds to get new data; I think it could help you.
You could make the poll time variable depending on the number of clients. Using your metaphor, the kid asks "Are we there yet?" and the driver responds "No, but maybe in an hour". Thankfully, Javascript isn't a stubborn kid so you can be sure he won't bug you until then.
You could consider polling every 5 seconds to start with, but after a while start to increase the poll interval time - perhaps up to some upper limit (1 minute, 5 minute - whatever seems optimal for your usage). The increase doesn't have to be linear.
A more sophisticated spin (which could incorporate monzee's suggestion to vary by number of clients) would be to allow the server to dictate the interval before the next poll. The server could then increase the interval over time, and you can even change the algorithm on the fly, or in response to network load.
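A small sketch of a server-dictated interval (assumes the response carries a hypothetical nextPollMs field, plus a hypothetical updatePage handler):

```javascript
// The server decides how soon the client may poll again, so it can
// slow everyone down under load without redeploying client code.
async function poll() {
  const res = await (await fetch("/api/check")).json(); // hypothetical endpoint
  if (res.hasNewContent) updatePage(res); // hypothetical handler
  setTimeout(poll, res.nextPollMs || 5000); // fall back to 5s if unspecified
}

poll();
```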
You could take a look at the 'Twisted' framework in python. It's event-driven network programming framework that might satisfy what you are looking for. It can be used to push messages from the server.
Perhaps you can send a query to a really simple script that doesn't need to make a real DB query, but only uses a simple timestamp to tell whether there is anything new.
And then, if the answer is true, you can do a real query, where the server has to do real work !-)
I would have a single instance calling the DB, and if a newer timestamp exists, put that new timestamp in an application variable. Then let all sessions check against that application variable. Or something like that. That way only one instance is calling the SQL server, and the number of clients doesn't matter.
I haven't tried this, and it's just the first idea off the top of my head, but I think that caching the timestamp and letting the clients check the cache is a way to do it. As for how to implement the cache (SQL Server cache, application variable, and so on), I don't know what's best.
Regarding how SO does it, note that it doesn't check for new answers continuously, only when you're typing into the "Your Answer" box.
The key then, is to first do a computationally cheap operation to weed out common "no update needed" cases (e.g., entering a new answer or checking a timestamp) before initiating a more expensive process to actually retrieve any changes.
Alternately, depending on your application, you may be able to resolve this by optimizing your change-publishing mechanism. For example, perhaps it might be feasible for changes (or summaries of them) to be put onto an RSS feed and have clients watch the feed instead of the real application. We can assume that this would be fairly efficient, as it's exactly the sort of thing RSS is designed and optimized for, plus it would have the additional benefit of making your application much more interoperable with the rest of the world at little or no cost to you.
I believe the approach should be based on a combination of server-side sockets and client-side ajax/comet. Like this:
Assume a chat application with several logged on users, and that each of them is listening via a slow-load AJAX call to the server-side listener script.
Whatever browser gets the just-entered data submits it to the server with an ajax call to a writer script. That server updates the database (or storage system) and posts a socket write to the aforementioned listener script. The latter then gets the fresh data and posts it back to the client browser.
Now, I haven't yet written this, and right now I don't know whether/how the browser limit of two concurrent connections screws up the above logic.
Would appreciate hearing from anyone with thoughts here.
AS
