I am creating a Chrome extension that checks the Gmail inbox.
I'm using the xml feed url to fetch it:
https://mail.google.com/mail/u/0/feed/atom
For updating, I'm using the chrome.alarms API to send a GET request every 10 seconds. Is that too much? If not, could I change it to 1 second? How much load do their servers have to handle in order to send me the feed's information?
Polling with XHR every 10 seconds is not a bad idea, but every second is probably too much. Each XHR request opens a new connection, sends a full set of request headers to the server, and receives a full response even when nothing has changed. If you need a real-time app, consider using WebSocket or Socket.IO instead: they keep one lightweight, persistent connection and are easy to use.
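As a rough sketch of what the polling side could look like in an extension background script (the alarm name and badge handling are illustrative, not from the question; note that chrome.alarms enforces a minimum period of roughly one minute, so sub-minute polling isn't possible with it anyway):

```javascript
// Poll the Gmail Atom feed on a chrome.alarms schedule (sketch).
const FEED_URL = 'https://mail.google.com/mail/u/0/feed/atom';

// The Gmail Atom feed reports the unread count in a <fullcount> element.
function parseUnreadCount(xml) {
  const match = xml.match(/<fullcount>(\d+)<\/fullcount>/);
  return match ? Number(match[1]) : null;
}

function startPolling() {
  chrome.alarms.create('checkMail', { periodInMinutes: 1 });
  chrome.alarms.onAlarm.addListener(async (alarm) => {
    if (alarm.name !== 'checkMail') return;
    const response = await fetch(FEED_URL);
    const count = parseUnreadCount(await response.text());
    if (count !== null) chrome.action.setBadgeText({ text: String(count) });
  });
}
```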
Notification APIs (Gmail seems to have them)
You can get updates with a lot less latency if you use a different technique such as "long polling" or "web sockets". This is very useful for things where you want no lag, like a real-time chat, a web-based video game, or a time-sensitive process like an auction or ticket/order queue. It might be a bit less important for something less real-time, like email (see the last paragraph in this answer).
Gmail seems to have an API that is explicitly designed for fast notifications.
HTTP polling
Most web servers can stand up to being called once a second without killing their site. If they can't, then they have some serious security problems (Denial of Service attacks, or worse if their site is buggy enough to suffer data loss when overloaded).
Google has big servers and protection, so you don't really need to worry about what they can handle, as long as they don't block you. Google may rate-limit calls to their Gmail API, and a user may end up throttled if you call the API more than they prefer. Consult their documentation to find out what their rate-limiting policies are.
To answer your question more generically: normal HTTP isn't really optimized for frequent polling for refreshed data. You can make a decent number of requests (even upwards of one a second or more). You probably won't kill the computer or browser or even make them run slowly, as long as the request and response payloads are small enough, and the processing/DOM changes you do with the response are minimal when the data comes back unchanged.
Assuming you don't violate per-site rate limits, and have small data payloads in the request you are making, then the biggest problem is that you might still be wasting a lot of bandwidth. For people who have to pay by the minute/megabyte, this is a problem. This is much more frequent in Europe than in the United States (although it is also frequent on cellular devices).
Consider if you really need email to be checked every second. Would every ten seconds be fine? Maybe every minute? Maybe your app could do a refresh when you send mail, but take longer to check for new mail when it is idle? Think about what use cases you are solving for before assuming everything always has to be constantly updated. What would waiting a few extra seconds break? If the answer is nothing, then you can safely slow down your updates. If there are use cases that would be broken or annoying, then figure out why it is important. See if there are even better ways to address that use case, or if updating frequently is actually the right answer. Sometimes it is!
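The "refresh on activity, slow down when idle" idea above can be sketched as a tiny interval policy (the constants and names are illustrative, not from any API):

```javascript
// Adaptive polling: poll quickly right after user activity, then back
// off toward a slow idle rate while nothing changes.
const FAST_MS = 10_000; // right after the user sends or reads mail
const SLOW_MS = 60_000; // ceiling while idle

// Double the wait after each unchanged poll, capped at the idle ceiling;
// snap back to the fast rate as soon as something changed.
function nextInterval(currentMs, changed) {
  if (changed) return FAST_MS;
  return Math.min(currentMs * 2, SLOW_MS);
}
```

Each time a poll completes, feed the result into `nextInterval` and schedule the next poll that far in the future.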
Related
I'm trying to develop a chat system in PHP, SQL, and AJAX. I wrote an AJAX function that fetches messages from the database when the window loads, but if I open two browser windows to test the application, a sent message only appears in the window it was sent from, not in both. To solve this problem I used setInterval to fetch messages every 1 second.
Do these frequent requests damage the server?
I don't quite know what you mean by "damage", but nothing can really be damaged by a few extra requests.
If you're wondering whether the webserver can handle the load, it really depends on how many chat sessions are going at the same time. Any decent web server should be able to handle a lot more than two requests per second. If you have thousands of chat sessions open, or you have very CPU intensive code, then you may notice issues.
A bigger issue may be your network latency. If your network takes more than a second for a round-trip communication with the server, then you may end up with multiple requests coming from the same client at the same time.
I'm creating a personal application which displays stock quotes realtime (updating every second), and I was wondering what was the best way to approach this project?
I'm going to query using Yahoo YQL: example query.
I've been researching WebSockets and Socket.IO, but I don't believe you can use them unless you control the server that has the data. Is this approach not possible?
Send an XMLHttpRequest every second? This seems really bad for some reason, just seeing all the requests in the developer tools makes me cringe and my laptop heat up.
Any thoughts? I've heard of people using an iframe or something to make the requests?
I cannot for the love of programming figure out how Google and Yahoo do it.
An IFRAME that reloads every second would have a similar effect to an AJAX request every second. Some pages use an IFRAME that refreshes every X seconds, but there is no magic there: an IFRAME is just another browser window embedded in the page.
You are right about WebSockets: the server must expose a WebSocket endpoint, otherwise it is not possible. If you have this option, go for it.
There is another push technology named Server-Sent Events (aka SSE, EventSource): http://caniuse.com/eventsource Again, the server must expose it, but it basically lets the client keep a persistent connection over which the server pushes events. If you have this option and WebSocket is not available, go for it.
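On the client, SSE is only a few lines (a sketch using the browser-only EventSource API; the endpoint path and JSON message shape are placeholders, not a real quote service):

```javascript
// Subscribe to a stream of server-pushed quotes via Server-Sent Events.
function subscribeToQuotes(onQuote) {
  const source = new EventSource('/quotes/stream');
  source.onmessage = (event) => {
    onQuote(JSON.parse(event.data)); // one pushed quote per event
  };
  source.onerror = () => {
    // EventSource reconnects automatically; nothing to do here.
  };
  return source; // call source.close() to stop receiving
}
```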
If you are not in control of the server, and the only option provided is regular HTTP calls, I guess you have no other choice. Please bear in mind that some trading providers limit the number of requests you can make per minute, or limit how often the information changes per minute, so by polling once per second you may not be achieving anything relevant... or you may get yourself banned.
I'm pretty new to JavaScript and APIs, but I think a Google or Yahoo API (Application Programming Interface) would be appropriate for linking the stock quotes to your app.
I have a page that uses a few timers and AJAX calls to make it dynamic, i.e. if I change anything on my iPad, the page updates on my laptop. It queries a database and updates.
Will this have an impact on my bandwidth because it constantly updates? Is there anything to be worried about?
Each AJAX call will create a connection to the server (unless an existing keep-alive connection is re-used) and send an HTTP request. This is extremely small, though, so it will not affect your network performance in a noticeable way.
However, for this kind of real-time notification polling is a bad idea. All somewhat modern browsers support WebSockets nowadays which use one persistent connection to transmit data.
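A minimal WebSocket client replacing the polling timers might look like this (a sketch; the endpoint URL, message shape, and reconnect delay are all assumptions):

```javascript
// Keep one persistent connection and let the server push updates.
function connectForUpdates(url, onUpdate) {
  const socket = new WebSocket(url);
  socket.onmessage = (event) => onUpdate(JSON.parse(event.data));
  socket.onclose = () => {
    // Re-establish the persistent connection if it drops.
    setTimeout(() => connectForUpdates(url, onUpdate), 1000);
  };
  return socket;
}
```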
Using a few timers and AJAX calls will impact the bandwidth, more so if you're on an iPad / iPhone somewhere using your prepaid Internet minutes on a SIM card etc.
The amount of impact it will create is, however, dependent on the frequency and the actual response. You have the following options to make it as painless as possible (bandwidth-wise):
minimize the AJAX response size - if nothing changes, make the response as small as possible, ideally completely empty
stop making AJAX calls when the application is in the background - on iOS, it's possible to tell when the app is in the background, so if you don't need visual updates at that time but the app is still running, simply stop requesting them.
if you do need to be notified with the app on background, you are best to use Push notifications (as the app can get killed, paused, suspended while in the background)
as pointed out in the previous comments, you can use long polling to replicate Push notifications as well
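One concrete way to get near-empty "nothing changed" responses is a conditional GET (a sketch with an injectable fetch implementation so it can be tested; it assumes the server emits ETag headers):

```javascript
// Conditional GET: send the last ETag back, and the server can answer
// 304 Not Modified with an empty body when nothing has changed.
function makeConditionalPoller(url, fetchImpl) {
  let lastEtag = null;
  return async function pollIfChanged(onChange) {
    const headers = lastEtag ? { 'If-None-Match': lastEtag } : {};
    const response = await fetchImpl(url, { headers });
    if (response.status === 304) return false; // unchanged, tiny response
    lastEtag = response.headers.get('ETag');
    onChange(await response.text());
    return true;
  };
}
```

In a browser you would pass the global `fetch` as `fetchImpl`.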
Hope this helps ;-)
Nowadays, real-time updates are common on most popular, heavily used sites:
StackExchange
Facebook
Twitter
I'm wondering how these "real time updates" work. I'm just looking for a general bird's-eye view. I suspect that the JS can't be calling the server every X seconds for an update and then appending it to the <ul>. Is a notification sent from the server telling the client to pull more content?
It would be great if there were a simple how-to article that explains this with a demo.
Stack Overflow is using Web Sockets for real time updates. If you take a look in the source code (2012 source code), you would see:
StackExchange.ready(function () {
    StackExchange.realtime.init('ws://sockets.ny.stackexchange.com');
    StackExchange.realtime.subscribeToInboxNotifications();
    StackExchange.realtime.subscribeToReputationNotifications('1');
});
But note that older Opera versions do not support WebSocket (support arrived in Opera 10.70).
However, Facebook does not seem to be using WebSockets; I think they are just using simple XHR with a technique called long polling, in which the server holds on to the connection until there is new information and then responds to the request. If you open up the developer tools you can see that there is always one request with a status of pending.
It is indeed sending a request every ~60 seconds.
It seems that Twitter also uses simple XHR (1 minute intervals) for their "real time updates".
Facebook uses long polling/Comet. It makes a connection and waits for a response; if there is no response, it times out and tries again. The timeout is around 40 seconds. That's how it does most of its instant updating. However, they use a combination of techniques. More on long polling here:
http://en.wikipedia.org/wiki/Comet_(programming)
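The client side of long polling can be sketched like this (the transport is injected and a stop condition added so the loop is testable; real code would loop until the page unloads):

```javascript
// Long polling: the server holds each request open until there is news
// (or a timeout), and the client reconnects immediately after each reply.
async function longPoll(fetchUpdates, handleUpdate, shouldStop) {
  while (!shouldStop()) {
    try {
      const updates = await fetchUpdates(); // may block for ~40-60 seconds
      for (const update of updates) handleUpdate(update);
    } catch (err) {
      // Network hiccup: wait briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
}
```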
I'm already tossing around a solution but as I haven't done something like this before I wanted to check what SO thought before implementation.
Basically I need to modify an existing web based application that has approximately 20 users to add push notifications. It is important that the users get the notifications at the same time (PC-A shouldn't get an alert 20 seconds before PC-B). Currently the system works off of AJAX requests, sending to the server every 20 seconds and requesting any updates and completely rebuilding the table of data each time (even if data hasn't changed). This seems really sloppy so there's two methods I've come up with.
Don't break the connection from server-client. This idea I'm tossing around involves keeping the connection between server and client active the entire time. Bandwidth isn't really an issue with any solution as this is in an internal network for only approximately 20 people. With this solution the server could push Javascript to the client whenever there's an update and modify the table of data accordingly. Again, it's very important that every connected PC receives the updates as close to the same time as possible. The main drawback to this is my experience, I've never done it before so I'm not sure how well it'd work or if it's just generally a bad idea.
Continue with the AJAX request, but only respond in intervals. A second solution I've thought of would be to allow the clients to make AJAX requests as per usual (currently every 20 seconds) but have the server only respond in 30 second intervals (eg 2:00:00 and 2:00:30 regardless of how many AJAX requests it receives in that span of time). This would require adjusting the timeout for the AJAX request to prevent the request timing out, but it sounds okay in theory, at least to me.
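The "respond only on interval boundaries" idea boils down to computing how long to hold each request (a sketch; the 30-second slot size comes from the question):

```javascript
// Align responses to fixed wall-clock slots (e.g. :00 and :30) so every
// client gets the update at (nearly) the same moment.
const SLOT_MS = 30_000;

// How long the server should hold a request received at nowMs before
// responding at the next slot boundary.
function msUntilNextSlot(nowMs, slotMs = SLOT_MS) {
  return slotMs - (nowMs % slotMs);
}
```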
This is for an internal network only, so bandwidth isn't the primary concern, more so that the notification is received as close to each other as possible. I'm open to other ideas, those are just the two that I have thought of so far.
Edit
Primarily looking for pros and cons of each approach. DashK has another interesting approach but I'm wondering if anyone has experience with any of these methods and can attest to the strengths and weaknesses of each approach, or possibly another method.
If I understand your needs correctly, I think you should take a look at Comet.
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins.
The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.
How about using an XMPP server to solve the problem?
Originally designed to be an Instant Messaging platform, XMPP is a messaging protocol that enables users in the system to exchange messages. (There's more to this - But let's keep it simple.)
Let's simplify the scenario a little bit. Imagine the following:
You're a system admin. When the system has a problem, you need to let all the employees, about 20 of them, know that the system is down.

In the old days, every employee would ask you, "Is the system up?" every hour or so, and you'd respond passively. While this works, you are overloaded - not by fixing the system outage, but by 20 people asking for system status every hour.

Now, AIM is invented! Since every employee has access to AIM, you think, "Hey, how about having every single one of them join a 'System Status' chat room, and I'll just send a message to the room when the system is down (or is back)?" By doing so, employees who are interested in knowing the system status simply join the 'System Status' room and are notified of every status update.
Back to the problem we're trying to solve...
System admin = "System" who wants to notify the web app users.
Employees = Web app users who wants to receive notification.
System Status chat room = still the System Status chat room
When a web app user signs on to your web app, have the page automatically log them onto the XMPP server and join the system status chat room.
When the system wants to notify the users, write code that logs onto the XMPP server, joins the chat room, and broadcasts a message to the room.
By using XMPP, you don't have to worry about:
Setting up a "lasting connection" - some open-source XMPP servers (ejabberd, Openfire) have built-in support for BOSH, XMPP's implementation of the Comet model.
How the message is delivered
You however will need the following:
Find a Javascript library that can help you to logon to an XMPP server. (Just Google. There're a lot.)
Find an XMPP library for the server-side code. (XMPP libraries exist for both Java and C#, but I'm not sure what system you're using behind the scenes.)
Manually provision each user on the XMPP server. (It seems like you only have 20 people, so that should be easy. However, if the group grows bigger, you may want to perform auto-provisioning, which is achievable through a client-side JavaScript XMPP library.)
As far as long-lasting AJAX calls go, this implementation is limited by the at-most-two-connections-to-the-same-domain issue. If you use up one connection for this XMPP call, you only have one more connection for other AJAX calls in the web app. Depending on how complex your web app is, this may or may not be acceptable: if two AJAX calls are already in flight, any subsequent AJAX call has to wait until one of the pipelines frees up, which may make your app feel slow.
You can fix this by converting all AJAX calls into XMPP messages, and have a bot-like user on the server to listen to those messages, and response to it by, say, sending back HTML snippets/JSON objects with the data. This however might be too much for what you're trying to achieve.
Ahh. Hope this makes sense... or not. :p
See http://ajaxpatterns.org/HTTP_Streaming
It allows you to push data from the server whenever the server wants to, not just in response to a query.
You could use this technique without making large changes to the current application, and synchronize output by the time on the server.
In addition to the other two great options above, you could look at Web Workers if you know your users have the latest Chrome, Safari, Firefox, or Opera as a browser.
A Worker has the added benefit of not operating in the same thread as the rest of the page, so performance will be better. The downside is that, for security purposes, you can only send string data between the two scripts and the worker does not have window or document context. However, JSON can be represented as a string, so there's really no limit to the data.
Workers can receive data multiple times and asynchronously. You set the onmessage handler to act each time it receives something.
If you can ask every user to use a specific browser (the latest Safari or Chrome), you can try WebSockets too.