I have a web application that needs to refresh some values often, because the changes have to be available almost in real time. To do this, I call a refresh.php routine via AJAX every 15 seconds, and it returns the updated information. The interval increases if there is no user activity.
I thought about creating a service worker in the browser (I already use one for the PWA anyway) and opening a WebSocket in it. Then, only when there is an update on the server, the server would open a socket to the logged-in users (whose IPs are saved in a DB) just to tell each browser that there is an update; the WebSocket message would then trigger the JavaScript routine that connects to the server and fetches the update.
I do not know whether it is possible to create the socket only to announce that there is an update, because in this case I do not want to leave the socket open; I want to create the connection only when the information changes, which would affect several users at once.
Has anyone ever needed or done anything like this, or would you have any other ideas?
I have used EventSource to get the online status of active users on my website.
The following JavaScript code is inserted in every page:
var source = new EventSource("set_online.php");
This code keeps requesting the set_online.php file continuously.
On the server side, i.e. in set_online.php, the following code is executed:
$query = "UPDATE my_db SET last_active = '{$current_time}' WHERE id = {$_SESSION["id"]}";
$result = mysqli_query($connection, $query);
Now I have two concerns about this:
Since the database is updating last_active continuously in real time, will it affect server load?
Since the connection is open as long as the user is on the website, will it create vulnerabilities?
SSE is not suitable for this purpose - or at least is not designed for it. SSE is a constant stream of events from server to browser.
Your script will work, though. The PHP script will do one thing (update the database) then exit. When it exits the connection is closed. The browser will see the connection has died, and after a few seconds will reconnect again. Then the cycle repeats.
Regarding your two questions:
1. It is not really continuous, more of a reconnect every 3 seconds. The server load might be significant.
2. The connection is not open continuously; but even if it were, it would not create any new vulnerabilities.
I would use an AJAX call, on a JavaScript interval, instead of SSE (see the sketch after this list). It has these advantages:
Older technology, so wider browser support
Explicit control over the timer interval, so you can control the balance between latency and server load.
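For example, a rough sketch of that approach, reusing your set_online.php endpoint (the 30-second interval here is just an arbitrary choice you can tune):

setInterval(function () {
    // ping the server so it can update last_active for this user
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "set_online.php");
    xhr.send();
}, 30 * 1000);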
Your solution is heavy and unnecessary; a better solution would be to reverse the idea and let the server push user status information to your clients.
You can achieve this by implementing sockets using libraries such as socket.io. It's quite simple to achieve and is a more scalable solution.
Basically, when the page is loaded, a connection is made between your client and the server, and when the server wants to communicate with clients it can simply emit an event such as user-status, for example.
Your clients can simply listen to this event and update their views accordingly.
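A minimal sketch of that client side with the socket.io client library (the server URL is a placeholder, and user-status is the example event name from above):

// connect once when the page loads
var socket = io("https://example.com");

// listen for pushed status changes and update the view
socket.on("user-status", function (status) {
    // update the DOM with the new user status here
    console.log("user status changed:", status);
});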
Hello, I am developing an auction app like tophatter.com. I want to implement an application that has a background process in it, and I want this process to run forever until I stop it.
http://eoction.com is our current site. The problem on our site is that when we refresh the page, the auction also restarts. We need something like a continuous process, as on tophatter.com: if you refresh the page, it loads the auction at its current, updated state.
I found this great service called PubNub. I am thinking we need a background process for this, which would run the auction in PubNub BLOCKS, and then when we visit the site we would just query its current state?
Does PubNub support something like this?
PubNub Web Page Best Practices
When a user refreshes your web app page or navigates to another page, there are things you need to consider as a web app developer, no matter what technologies you may be using. I will address, at a high level, the things you need to do when PubNub is integrated into your web page.
Restore Parameter
Whether the user interrupts your connection to PubNub or it is a network failure, you will want PubNub to reconnect and continue where it left off as much as possible. The PubNub JavaScript SDK has an initialization parameter called restore that, when set to true, will reconnect to PubNub and retrieve missed messages after the connection is dropped and reestablished.
var pubnub = new PubNub({
    subscribeKey: "mySubscribeKey",
    publishKey: "myPublishKey",
    ssl: true,
    uuid: getUUID(),  // reuse the same UUID for this end user (see below)
    restore: true     // reconnect and catch up on missed messages
});
Reuse UUID
It is important to reuse the same UUID for each end user as this will allow PubNub to identify that user uniquely when it comes to Presence, so that it doesn't produce new join events for the same end user. The PubNub JavaScript SDK actually generates a UUID, stores it in localStorage and reuses it by default, but very likely you have your own UUID that you would like to use for each of your end users.
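For illustration, the getUUID() helper used in the init snippet above could simply persist a generated ID in localStorage and reuse it on every page load (the storage key and ID format here are arbitrary):

function getUUID() {
    var uuid = localStorage.getItem("pn_uuid");
    if (!uuid) {
        // any sufficiently unique string will do for this sketch
        uuid = "user-" + Date.now() + "-" + Math.random().toString(16).slice(2);
        localStorage.setItem("pn_uuid", uuid);
    }
    return uuid;
}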
Last Message Received Timetoken
If the network disruption is brief, as is the case with a page refresh or page navigation, then missed messages are retrieved when restore:true is implemented in the init as stated above. But when the user is offline for more than, say, 5 minutes, you may want to retrieve missed messages on one or more channels. The best way to do this is to keep track of the timetoken of the last received message by storing it in localStorage every time a message is received via the subscribe callback. When the user comes back online and it has been more than 5 minutes since they were last online, call history using this last received message timetoken on each channel that you need to get missed messages from.
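A rough sketch of that idea with the v4 JavaScript SDK (the channel name is a placeholder and error handling is omitted):

// remember the timetoken of the last message received
pubnub.addListener({
    message: function (m) {
        localStorage.setItem("lastTimetoken", m.timetoken);
    }
});

// after a long disconnect, fetch anything published since that timetoken
pubnub.history({
    channel: "my-channel",
    end: localStorage.getItem("lastTimetoken"),  // messages at this timetoken and newer
    count: 100
}, function (status, response) {
    // response.messages holds the missed messages
});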
Subscribe to Channels
Finally, you'll want to make sure that the user is subscribed to the channels they expect to be, based on what their state was prior to the connection disruption. If it is a page refresh, you likely just want to resubscribe them to the same list of channels. To do this, you just need to keep a list of channels they are currently subscribed to, once again, in localStorage. If the user navigates to a new page and this causes a full page reload (modern web apps should not require this, but...) then you may want to unsubscribe from some channel(s) and subscribe to new channel(s); it just depends on what that page navigation means to your app. Modern web app frameworks do not require a full page reload for page navigation, since the web app acts more like a desktop app than older web apps. And again, if the user was offline for quite some time (more than 5 minutes) then it may not make sense to subscribe them to the same channels that they were subscribed to before. It really depends on your use case.
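Something along these lines (the channel names are placeholders):

// persist the current channel list whenever it changes
localStorage.setItem("channels", JSON.stringify(["auction.123", "chat.456"]));

// on page load, resubscribe to whatever was stored
var channels = JSON.parse(localStorage.getItem("channels") || "[]");
if (channels.length > 0) {
    pubnub.subscribe({ channels: channels });
}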
And by the way, Tophatter uses PubNub ;) but all of the above are generic best practice guidelines and recommendations and are not referencing any one app in particular.
EDIT: To address your question specifically, as pointed out in comments below...
You can't implement a long-running process in PubNub BLOCKS (not currently, anyway), so you will need a server process for this. When the user refreshes the page, you just need to hit your server for the current state. If you are using PubNub to keep this progress bar updated in realtime, you just subscribe to the channel that is sending the state of that progress bar and update your client. The same best practices I provided above still apply.
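Roughly, that refresh flow might look like this (the endpoint, channel name and renderProgress helper are all placeholders for whatever your app uses):

// on page load, ask your own server for the current auction state
fetch("/auction/state")
    .then(function (r) { return r.json(); })
    .then(function (state) { renderProgress(state); });

// then keep it updated in realtime via the progress channel
pubnub.subscribe({ channels: ["auction-progress"] });
pubnub.addListener({
    message: function (m) { renderProgress(m.message); }
});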
I'm having an issue with Socket.io receiving messages just before a page navigation happens - generally when the message is a direct result of some server-side action triggered by the navigation.
What I'm seeing right now looks like this:
Socket.io connects
User triggers a page navigation (submits a form, refreshes, etc.)
Server-side logic sends a request to the socket.io server, which immediately dispatches the event to the still-connected client
The client receives and confirms the request (I'm pretty sure there's some confirmation of messages built-in to socket.io, correct me if I'm wrong), and would display a notification to the user, but then ...
The socket.io connection closes on the original page
The new page loads and displays
Socket.io opens a new connection, but there are no new messages, because the last one was received and confirmed.
This isn't really a bug, per se, since I don't think it's reasonable to expect Socket.io to close the connection in advance of a navigation occurring. However, I'm not really sure what the best way to handle this is. Currently I keep one connection open per-client at a time, and close the other when a new one connects. This doesn't happen in this case though, since the first one has closed before the second one connects. I also could keep a list of all clients, but that wouldn't solve this problem either, because the message would still be received by the first connection.
Can someone suggest a solution to this problem that would ensure the user always sees a notification for the message?
Socket.io tracks logical connections with its own session IDs. If you watch the console when a client connects, you'll see the IDs:
info - handshake authorized Q9syoIK47JI7dACYpxiA
What's important to understand is that those IDs are per-page, and completely separate from HTTP sessions. The Socket.io client library simply holds its session ID in a JavaScript variable. Therefore, upon navigation, the ID is obviously lost.
So, upon navigation, this happens:
User is on a page connected to sio session 1.
We begin navigation to new page. On window.onbeforeunload, Socket.io initiates a synchronous XHR request to tell the server it is disconnecting. If it succeeds, the session (1) is immediately terminated; otherwise, the session will eventually time out.
A new page is loaded. It will connect to the Socket.io server and be assigned a new session ID, 2.
Anything you send to session 1 will obviously not be delivered since our client is now connected to session 2.
With the base Socket.io functionality, it is impossible to distinguish between a user who navigates between pages and a new user. In either case, the user will connect to a new Socket.io session.
Without knowing exactly how your app works or what you're trying to accomplish, it's hard to give a definitive recommendation for solving your problem.
Most likely, what you need to be able to do is associate a Socket.io session with the user's HTTP session. You can store notifications in a queue in the user's session, and delete them when they are displayed. There are two ways of doing this:
Since you're doing a full page load, you can send queued notifications directly down with the page itself. Delete the queue when you've successfully rendered.
On a new Socket.io connection, send unread notifications over the socket. Give .emit a callback function – this is the confirmation of delivery that Socket.io provides you (see the sketch below). When delivery is confirmed, you can delete the notification queue from the user's HTTP session.
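A sketch of that second option using Socket.io acknowledgements (the event name and the queue/display helpers are made up for this example):

// server side: deliver a queued notification, remove it once acknowledged
socket.emit("notification", notification, function () {
    // the client invoked the callback, so delivery is confirmed
    removeFromSessionQueue(userId, notification.id);
});

// client side: display the notification, then acknowledge it
socket.on("notification", function (data, ack) {
    showNotification(data);
    ack();
});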
Simplest solution: Use the onbeforeunload event to disconnect socket.io.
I verified with an HTTP debugger that this event fires before the browser issues the request (at least in Chrome), so if socket.io is disconnected here, it won't receive any messages meant for the following page.
window.onbeforeunload = function() {
socket.disconnect();
};
I haven't tested this in multiple browsers, so it might not be a perfect solution, but it seems to solve the problem for now.
If there is a timeout set on one of our pages, and that same page is opened in another window/tab, is there a way to destroy/stop the timeout in the other window? We have employees who will use our system but open it again from their favorites. If they do this, the already-open window will run the interval and then time out. So while they are working in the new window they opened, they will not be able to finish what they are doing, because the other window timed them out.
Are there solutions to do this if a new window is opened?
In any sane web application, it is safe to have multiple windows open – especially in respect to session timeouts, because "session" state is managed by the server, not the client.
First, consider why web servers manage session state. HTTP was designed as a stateless protocol, which means any given request cannot conclusively identify who issued the request. This is fine for serving static resources, but is obviously not useful if we want to develop a more interactive app; Netscape later added cookies to their browser to address this.
Cookies solve the state problem (since the browser will issue subsequent requests with the cookie[s]), but they are inherently insecure: a malicious client could modify a site's cookies. If, for example, upon login we set a cookie called uid to the user's ID, it would be trivial for someone to fake a cookie with uid=1, which might be your site's administrator account. Oops.
This is why web application frameworks invented the "session" construct. Each time a request is made with no cookie, the server creates a new (random) session key and sets the client's session cookie to that key. The web server keeps track of sessions and all state associated with each session. Important here is that the key itself contains no data, is large and random enough (has relatively high entropy), and is useless outside of your server. It is thus not possible to know how to change the key to gain access to other sessions.
Think of sessions as a large array – one item for each session, and a map of variables in that item. Conceptually, it might look something like this: (remember that this data resides on the server!)
session['safa4fwsa34rff4j9'] = { uid: 1, ... }
session['ajiokinmoi3235000'] = { uid: 4312, ... }
session['9lij34fff032e40k0'] = { uid: 9098, ... }
If I was signed in as user 1, my browser would send a cookie with sid=safa4fwsa34rff4j9. The server looks up this session, and passes the saved state ({uid:1}) on to your scripts. When your scripts are done, the server saves any changes back into its data store. (Session data is often kept in-memory, but in large sites, session data can be saved in a database.)
So what does all of this have to do with timeouts? This session data cannot be kept indefinitely because you'd eventually run out of storage space (whether that means running out of RAM or filling up the database your sessions are stored in).
Instead, the server also stores an expiration date & time with each session. Each time the session is accessed (by a client sending a request with the session's key), the expiration date is reset. The expiration date can be set anywhere from seconds from now to years from now (depending on what server you're using). You configure how long you want your server to hang on to sessions; IIS defaults to 10 minutes, PHP to ~24 minutes.
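Purely as an illustration of where that timeout lives on the server, here is how it might be configured in a Node.js app with express-session (the question doesn't say which stack is in use, so treat this as just one example):

var express = require("express");
var session = require("express-session");
var app = express();

app.use(session({
    secret: "keyboard cat",               // used to sign the session ID cookie
    resave: false,
    saveUninitialized: false,
    cookie: { maxAge: 20 * 60 * 1000 }    // expire idle sessions after 20 minutes
}));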
In this model, the only thing that really matters is the last time a client issued any request, thus resetting his session's expiration/timeout. It wouldn't matter if multiple windows are open, because as long as one of them has accessed a page recently, all windows will still be active. If the session expires, then all windows are automatically expired when they make their next request.
Something that might muddy this issue is if you're doing some kind of AJAX polling, but the question doesn't indicate what technologies are being used. (#OP, it would be helpful if you included tags for your server stack.)
To summarize all of this: If you're doing any kind of session management/expiration on the client, you're doing it wrong. Your app is likely insecure.
I'm currently fooling around with AJAX. Right now, I created a Markdown previewer that updates on change of a textarea. (I guess you know that from somewhere... ;-) ).
Now, I'm trying to figure out how to update a page when an event is fired by another client. So to speak, an asynchronous message board: a user writes something, an event is fired, the post is written.
But on the other clients' pages, the new post is of course not yet available until they reload and get the updated list of posts from the database.
Now, how can you get this to work asynchronously? So in that moment when one client does something, the other clients all get to know that he did something?
I don't think this can be done entirely in AJAX, but I also have no idea whatsoever how to implement this on the server side, as it would require a page reload to inform the other clients of the event.
I'm thinking of creating a file or database entry that hashes the current state of data. Whenever a client loads the page, he saves this hash. Then, a timer (does this exist in JavaScript?) checks for the hash every few seconds.
As soon as anyone changes the database, the hash is recalculated. If the script sees that the hash has changed and is different from the one saved, it reloads the contents from the database and saves the new hash.
Is that even going to work?
Polling that is as light as possible is really the best solution here. Even if you did use a socket or something, that's still basically a live connection waiting around that will likely have to poll itself (albeit in a more efficient way).
20 queries in 10 minutes that have responses like {"updates":false} shouldn't even be putting a dent in your application. I mean, imagine someone browsing your site and requesting 20 pages plus the related images/scripts/etc. (even if some caching is involved); there could easily be hundreds of requests triggering all sorts of database queries for information on pages they don't actually care about.
You could use polling. For example, each client might send repeated AJAX requests to the server, say every 30 seconds, to see whether new posts are available and, if so, show them:
setInterval(function() {
    // poll the server for new posts ("latest_posts.php" is just a placeholder endpoint)
    fetch("latest_posts.php")
        .then(function(response) { return response.json(); })
        .then(function(posts) {
            // if new posts are available, update the DOM here
        });
}, 30 * 1000);
On the other hand, when someone decides to write a new post, you send an AJAX (or non-AJAX) request to the server to store this post in the database.
Another, less commonly used approach is the concept of Comet and the HTML5 WebSockets implementation, which allow the clients to be notified by the server of changes using push.