Update: I am getting the impression that this is not even the right website to post this. If someone can point me in the right direction, I'd be appreciative...
I have an existing PHP+MySQL application that wasn't built to render "real-time" or similarly live-style data. But now I need to build in a way to pull nearly real-time data into the application and keep the data on the page fresh. This live data is only for 1 page in the application.
I looked at things like socket.io and PHP-based WebSocket libraries, but they seemed like overkill because the data is basically coming from one source and being delivered to one person (the client). Multiple other users could have this process running, but each one would bring their own data endpoint. That's... probably a year down the road, but good to think about. Ideally there would be hundreds, or even thousands, of users on the system pulling their live-ish data, so I want this to be as streamlined and low-impact as possible.
Users must be authenticated and authorized to consume the data. This is already baked into the current system.
The API to get the data (which has already been built by another vendor) is also NOT streaming. It's set on a 20-second cron, so the new data is available every 20 seconds, which satisfies the client's needs.
My current plan is to do something like this...
1. Data is pulled on a cron every 20 seconds, organized, and stored in the database (complete)
2. Adjust #1 so it also does any additional proprietary calculations on the data AND compiles + writes a JSON file on the server (unique to the user) containing exactly the data needed for the front end (the DB data is still needed for other pages)
3. Create a small PHP-based service which validates a client-provided JWT and reads the JSON file out
4. Write an AJAX front end to poll the endpoint from #3 every X seconds, using the JWT for authorization (a rough sketch follows below)
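For step 4, a minimal browser-side sketch might look like this. The endpoint name (/live-data.php), renderLiveData(), and the way the JWT is exposed to the page are all placeholders for whatever your app already has:
// Poll the small PHP service from step 3 every 20 seconds, sending the JWT.
const currentUserJwt = document.querySelector('meta[name="live-jwt"]').content; // however your app exposes the JWT
const POLL_INTERVAL_MS = 20000; // matches the 20-second cron

async function pollLiveData() {
  try {
    const res = await fetch('/live-data.php', {
      headers: { 'Authorization': 'Bearer ' + currentUserJwt }
    });
    if (!res.ok) throw new Error('HTTP ' + res.status);
    renderLiveData(await res.json()); // hypothetical function that updates the page
  } catch (err) {
    console.error('Live data poll failed:', err);
  }
}

pollLiveData();
setInterval(pollLiveData, POLL_INTERVAL_MS);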
This all seems sort of like I might be reinventing the wheel, or missing something. The fact that this is an existing PHP based application (LAMP) does have some limiting factors, but I feel like there's got to be a more efficient way to handle this... It's pretty new to me. Also, I'm open to other technologies that'll run on the LAMP stack, if it'll make things better.
I would say go for the API solution in the beginning :) It fits the architecture better and is for sure the least amount of work. Also, if there is a problem with the "live" feeling of the data, you can fix it by polling more often or by introducing long polling, assuming you also shorten the cron interval.
I mean, in the end it is all about impact for the time spent; don't start implementing features that customers don't care about :)
The biggest problem to solve is implementing it in a way that fits your requirements and is somewhat extendable in the future. You will still have to deal with issues like resolution, timeouts, reducing server processing when requesting data, and so on!
For me, if you need to maintain global service state because a single client's request could affect all other connected clients' requests, then most server-side scripting languages are not the best choice! And to add to that: if you plan on implementing something like this with PHP, you will be setting yourself up for a living nightmare! Why? Because, simply put, PHP's socket implementation is that bad!
Related
I've read a few StackOverflow posts related to this subject but I can't find anything that specifically helps in my scenario.
We have multiple monitoring instances within our network, monitoring different environments (Nagios, Icinga, more...). Currently I have a poller script written in PHP which runs every minute via cron; it asks each instance to return all of its problems in JSON, and the script then interprets this and pushes it into a MySQL database.
There is then an 'overview' page which simply reads the database and does some formatting. There's a bit of AJAX involved: every X seconds (currently 30) it checks for changes (via a PHP script call), and if there are changes it requests them via AJAX and updates the page.
There are a few other little bits too (click a problem and another AJAX request goes off to fetch the problem details to display in a modal, etc.).
I've always been a PHP/MySQL dev, so the above methodology seemed logical to me and was quick/easy to write, and it works 'ok'. However, the problems are: the database is constantly being polled by many users, and there's a mess of JavaScript on the front end doing half the logic with PHP on the back end doing the other half.
Would this use case benefit from switching to NodeJS? I've done a bit of Node.js before but nothing like this. Can I subscribe to MySQL updates? Or trigger them when a 'data fetcher' pushes data into the database? I've always been a bit confused because I use PHP to create data and JavaScript to 'draw' the page: is there still a split, with NodeJS doing the logic and front-end JavaScript creating all the elements, or does NodeJS do all of this now? Sorry for the lack of knowledge in this area...
This is definitely an area where Node could offer improvements.
The short version: with websockets in the front-end and regular sockets or an API on the back-end you can eliminate the polling for new data across the board.
The long version:
Front-end:
You can remove all need for polling scripts by implementing websockets. That way, as soon as new data arrives on the server, you can broadcast it to all connected clients. I would advise Socket.io or the Primus websocket wrapper. Both are very easy to implement and incredibly powerful for what you want to achieve.
All data processing logic should happen on the server. The data is then sent to the client and should be rendered on the existing page, and that is basically the only logic the client should contain. There are some frameworks that do all of this for you (e.g. Sails) but I don't have experience with any of those frameworks, since they require you to write your entire app according to their rules, which I personally don't like (but I know a lot of developers do).
If you want to render the data in the client without a huge framework, I highly recommend the lightweight but incredibly useful Transparency rendering library. Using this, you can format a Javascript object on the server using Node, JSONify it, send it to the client, and then all the client would have to do is de-JSONify it and call Transparency's .render.
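To make the front-end half concrete, here is a hedged client-side sketch: receiving pushed updates over Socket.io and handing them to a render call. The 'problems' event name and the #problem-list container are assumptions, and the .render() call follows Transparency's documented jQuery-plugin style; adapt to however you actually render:
// Receive pushed updates and render them on the existing page.
const socket = io(); // connects back to the Node server that served the page

socket.on('problems', function (problems) {
  // problems arrives as a plain JavaScript object/array; no manual JSON parsing needed
  $('#problem-list').render(problems); // Transparency render, keyed by class/id/data-bind
});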
Back-end:
This one depends on how much control you have over the behaviour of the instances you need to check. I assume you have some control, since you can get all their data in a nice JSON format. So, there are multiple options.
You can keep polling every so often. This is the easiest solution since it requires no change to the external services. The Javascript setInterval function is very useful here. Depending on how you connect with the instances, you might be able to use a module like Request to do the actual request, so that takes out a bunch more of the heavy lifting.
The benefit of implementing the polling in your Node app as well, is that you will receive the data in your Node app and that way you can immediately broadcast it to the clients, even before inserting it into a database. This will greatly reduce the number of queries on your database.
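A hedged sketch of that first option: a Socket.io server plus a one-minute polling loop that broadcasts whatever comes back. The instance URLs, the 'problems' event name, and the payload shape are assumptions:
// Socket.io server that polls the monitoring instances and pushes results out.
const http = require('http');
const { Server } = require('socket.io'); // socket.io v3+

const httpServer = http.createServer();
const io = new Server(httpServer);
httpServer.listen(3000);

const instances = [
  'http://nagios.internal/api/problems', // placeholder URLs
  'http://icinga.internal/api/problems'
];

setInterval(async function () {
  for (const url of instances) {
    try {
      const res = await fetch(url);                   // built into Node 18+; use a request library otherwise
      const problems = await res.json();
      io.emit('problems', { source: url, problems }); // broadcast before touching the database
      // ...optionally also insert into MySQL here for the other pages
    } catch (err) {
      console.error('Poll failed for', url, err);
    }
  }
}, 60 * 1000);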
An alternative to polling would be to set up a simple Express-based API where the applications can post their 'problems', as you call them. This way your application will get notified the moment a problem occurs, and combined with the websockets connection to the client this would result in practically real-time updates.
For redundancy, you would keep a polling timer alongside the API, so that you can still check the instances in case something goes wrong that causes them to stop sending data.
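A sketch of that API option, reusing the io server from the previous sketch; the route name and payload shape are assumptions:
// Monitoring instances POST their problems here the moment they occur.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/problems', function (req, res) {
  io.emit('problems', req.body); // relay to every connected browser immediately
  res.sendStatus(204);
});

app.listen(3001);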
An alternative to the more high-level API would be to just use direct socket communication, which is basically the same approach only using a different set of functions.
Lastly, you could also keep the PHP-based polling script. This would be the most efficient solution since you wouldn't go and replace everything. Then from the Node app that's connected to the clients with websockets, you could set an interval to query the database every so often and broadcast the updates. This will still greatly reduce the number of queries, since no matter how many clients are connected there will only be one query, the response of which then gets sent to all connected clients.
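And a hedged sketch of that last option, keeping the existing PHP poller and letting Node only watch the database (table and column names are assumptions; io is the Socket.io server from the earlier sketch):
// One periodic query, broadcast to however many clients are connected.
const mysql = require('mysql2');
const db = mysql.createConnection({
  host: 'localhost', user: 'monitor', password: 'secret', database: 'monitoring'
});

let lastSeenId = 0;

setInterval(function () {
  db.query('SELECT * FROM problems WHERE id > ? ORDER BY id', [lastSeenId], function (err, rows) {
    if (err) return console.error(err);
    if (rows.length === 0) return;
    lastSeenId = rows[rows.length - 1].id;
    io.emit('problems', rows); // same broadcast as before, regardless of client count
  });
}, 30 * 1000);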
I hope my post has given you some ideas of how you could implement your application using Node. Keep in mind though that I am just one developer; this is how I would approach building your application in Node, and there will definitely be others who have different opinions.
So I have a Rails app (which in this case seems like it would be irrelevant, but I'll mention it anyway). It's a sort of chat room application.
In order to tell which users are currently in a chat room, I've been using Javascript polling.
So a simple
$(function() {
    // re-fetch the user list every 15 seconds
    setInterval(updateUsers, 15000);
});
where updateUsers just makes an AJAX GET request to pull the array of users currently in the chat room.
Here's my question: 15 seconds is a pretty long time to wait to poll. How frequently should I do it without performance issues? Obviously it depends on a lot of factors, but I'd like to hear those factors. I've seen a bunch of similar questions for receiving messages in chat rooms, but none yet for lists of users, which is why I'm asking this question.
It depends on a ton of things, like your infrastructure, the number of expected users, etc. Even if we had those numbers, it's hard to tell what would be a good timeout.
If you are only sending out a simple JSON array with the list of users, I'd say experiment with a 3-5 second delay and adjust from there. This is a case of premature optimization: you're trying to solve a problem you don't yet have.
There are, however, two other possible solutions:
You could send only the difference. When you poll, you return a message saying which users have connected and which have left since the last poll. This requires some kind of server-side tracking, but can be done.
The other solution would be to not use polling at all, and use a more modern technique like WebSockets / long polling. These allow the server itself to send messages to your clients. As such, you can send them an initial list when they connect, and a single minimal message every time someone else connects / leaves. A great solution to this in a Node environment is Socket.IO. I'm not much of a Ruby guy, so I don't know if anyone has done something similar, but I wouldn't be surprised if someone had ported the whole thing to Rails. Search around; I'm sure you'll find something that fits your needs. A rough sketch combining both ideas follows below.
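Here is a hedged Socket.IO sketch of the combined approach: the full list on connect, only deltas afterwards. Event names and the way the username is passed are assumptions:
// Track who is in the room and push only the changes.
const { Server } = require('socket.io');
const io = new Server(3000); // socket.io can listen on a port directly

const usersInRoom = new Set();

io.on('connection', function (socket) {
  const username = socket.handshake.query.username; // assumes the client sends a name

  usersInRoom.add(username);
  socket.emit('user-list', Array.from(usersInRoom)); // full list, once, to the new arrival
  socket.broadcast.emit('user-joined', username);    // everyone else just gets the delta

  socket.on('disconnect', function () {
    usersInRoom.delete(username);
    io.emit('user-left', username);
  });
});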
Anything more frequent adds additional load, whether on the server, the client, or both.
Having said that, I don't think there's a "sweet spot" (which is what you appear to be looking for). However, you can look into the Ruby Push API, which basically keeps a connection open at all times and sends data only when necessary. (Having searched a little further, there also appears to be a Juggernaut plugin.)
I think you should use some Comet technology:
http://en.wikipedia.org/wiki/Comet_(programming)
Or add a function that looks at the average response time and changes the interval dynamically. Maybe the server could tell the client "I have a lot to do, please wait 30 seconds until the next request".
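A hedged sketch of that adaptive idea, assuming the server includes a nextPollMs hint in its JSON response; the endpoint and field names are made up here:
// Self-rescheduling poll whose delay the server can adjust on the fly.
async function pollUsers() {
  let delay = 15000; // default if the server gives no hint
  try {
    const res = await fetch('/chatroom/users.json');
    const body = await res.json();
    updateUsers(body.users);                      // existing render function
    if (body.nextPollMs) delay = body.nextPollMs; // e.g. 30000 when the server is busy
  } catch (err) {
    console.error(err);
  }
  setTimeout(pollUsers, delay);
}

pollUsers();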
Imagine a space shooter with a scrolling level. What methods are there for preventing a malicious player from modifying the game to their benefit? Things a cheater could do that are hard to limit server-side are auto-aiming, peeking outside the visible area, speed hacking and other things.
What ways are there of preventing this? Assume that the server is any language and that the clients are connected via WebSocket.
Always assume that the code is 100% hackable. Think of ways to prevent a client completely rewritten (for the purposes of cheating) from cheating. These can be things such as methods for writing a secure game protocol, server-side detection, etc.
The server is king. Clients are hackable.
What you want to do is two things with your websocket: send game actions to the server and receive game state from the server.
You render the game state, and you send input to the server.
Auto-aiming - this one is hard to solve. You have to go for realism: if a user hits 10 headshots in 10 ms then you kick him. Write a clever cheat-detection algorithm.
Peeking outside the visible area - solved by only sending the visible area to each client.
Speed hacking - solved by handling input correctly. You receive an event that user A moved forward, and you control how fast he goes.
You can NOT solve these problems by minifying code. Code on the client is ONLY there to handle input and display output. ALL logic has to be done on the server.
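As a hedged illustration of keeping all logic on the server (the tick rate, speed cap and message shapes are assumptions), the client only ever sends an intent and the server decides what actually happens:
// Server-authoritative movement: speed hacks have no effect because the
// server, not the client, applies the velocity.
const { Server } = require('socket.io');
const io = new Server(3000);

const MAX_SPEED = 5; // world units per tick
const TICK_MS = 50;

io.on('connection', function (socket) {
  const player = { x: 0, y: 0, input: { dx: 0, dy: 0 } };

  socket.on('input', function (msg) {
    // keep only the direction; ignore whatever magnitude the client claims
    const len = Math.hypot(msg.dx, msg.dy) || 1;
    player.input = { dx: msg.dx / len, dy: msg.dy / len };
  });

  const tick = setInterval(function () {
    player.x += player.input.dx * MAX_SPEED;
    player.y += player.input.dy * MAX_SPEED;
    socket.emit('state', { x: player.x, y: player.y }); // send only what this player may see
  }, TICK_MS);

  socket.on('disconnect', function () { clearInterval(tick); });
});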
You simply need to write server-side validation. The only catch is that game input is significantly harder to validate than form input because of its complexity, but otherwise it's the exact same thing you would do to make forms secure.
You need to be really careful with your "input is valid" detection, though. You do not want to kick/ban highly skilled players from your game. It's very hard to hit the balance between being too lax and being too strict on bot detection; the whole realm of bot detection is very hard overall. For example, Quake had an auto-aim detection that kicked legitimately skilled players back in the day.
As for stopping bots from connecting to your websocket directly, set up a separate HTTP or HTTPS verification channel in your multiplayer game for added security. Use multiple HTTP/HTTPS/WS channels to validate a client as being "official", acting as a form of handshake. This will make connecting to the websocket directly harder.
Example:
Think of a simple multiplayer game: a 2D, room-based racing game. Up to n users go onto a flat 2D platformer map and race to get from A to B.
Let's say for argument's sake that you have a foolproof system where complex authentication goes over an HTTPS channel, so that users cannot access your websocket channel directly and are forced to go through the browser. You might have a Chrome extension that deals with the authentication, and you force users to use that. This reduces the problem domain.
Your server is going to send all the visual data the client needs to render the screen. You cannot obscure this data away. No matter what you try, a skilled hacker can take your code and slow it down in the debugger, editing it as he goes along, until all he's left with is a primitive wrapper around your websocket. He lets you run the entire authentication, but there is nothing you can do to stop him from stripping out any JavaScript you write to prevent that. All you can achieve is limiting the number of hackers skilled enough to access your websocket.
So the hacker now has your websocket in a Chrome sandbox. He sees the input. Of course your race course is dynamically and uniquely generated; if you had a fixed set of them, the hacker could pre-engineer the optimum race route. The data you send to visualise this map can be processed faster than any human could interact with your game, and the optimum moves to win your racing game can be calculated and sent to your server.
If you were to try to ban players who reacted too fast to your map data and call them bots, the hacker adjusts and adds a delay. If you try to ban players who play too perfectly, the hacker adjusts and plays less than perfectly using random numbers. If you place traps in your map that only algorithmic bots fall into, they can be avoided by learning about them through trial and error or a machine-learning algorithm. There is nothing you can do to be absolutely secure.
You have only ONE option to absolutely avoid hackers: build your own browser which cannot be hacked, with the security mechanisms built into the browser itself, and do not allow users to edit JavaScript at runtime.
At the server-side, there are 2 options:
1) Full server-side game
Each client sends their "actions" to the server. The server executes them and sends relevant data back. e.g. a ship wants to move north, the server calculates its new position and sends it back. The server also sends a list of visible ships (solving maphacks), etcetera.
2) Full client-side game
Each client still sends their actions to the server. But to reduce workload on the server, the server doesn't execute the actions but forwards them to all other clients. The clients then resolve all actions simultaneously. As a result, each client should end up with an identical game. Periodically, each client sends their absolute data (ship positions, etc.) to the server and the server checks if all client data is identical. Otherwise, the games are out of sync and someone must be hacking.
Disadvantage of the second method is that some hacks remain undetected: A maphack for example. A cheater could inject code so he sees everything, but still only sends the data he should normally be able to see to the server.
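A hedged sketch of the periodic check in option 2: each client reports a hash of its game state for a given tick, and the server flags any divergence. This assumes all clients serialize their state in exactly the same way:
// Compare state hashes reported by clients for the same tick.
const crypto = require('crypto');
const reports = new Map(); // tick -> Map<clientId, hash>

function reportState(clientId, tick, state) {
  const hash = crypto.createHash('sha256').update(JSON.stringify(state)).digest('hex');
  if (!reports.has(tick)) reports.set(tick, new Map());
  reports.get(tick).set(clientId, hash);

  const hashes = Array.from(reports.get(tick).values());
  if (hashes.length > 1 && new Set(hashes).size > 1) {
    console.warn('Clients disagree at tick ' + tick + ': out of sync or someone is cheating');
  }
}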
--
At the client-side, there is 1 option:
A javascript component that scans the game code to see if anything has been modified (e.g. code modified to render objects that aren't visible but send different validation data to the server).
Obviously, a hacker could easily disable this component. To fix that, you could force the client to periodically reload the component from the server (The server can check if the script file was requested by the user periodically). This introduces a new problem: the hacker simply periodically requests the component via AJAX but prevents it from running. To avoid that: have the component redownload itself, but a slightly modified version of itself.
For example: have the component be located at yoursite/cheatdetect.js?control=5.
The server will generate a slightly modified cheatdetect.js so that in the next iteration, cheatdetect.js?control=22 (for example) must be downloaded. If the control mechanism is sufficiently complicated, the hacker won't be able to predict which control number to request next, and cheatdetect.js must be executed in order to continue the game.
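A hedged Node/Express sketch of that rotating control number. Session handling and the script generator are placeholders; the point is only that the next number is unpredictable and is handed out only when the current one is requested:
// Serve cheatdetect.js only when the expected control number is presented.
const controls = new Map(); // sessionId -> control number the client must request next
// controls.set(sessionId, firstControl) would happen when the game page is first served.

app.get('/cheatdetect.js', function (req, res) {
  const { session, control } = req.query;
  if (String(controls.get(session)) !== control) {
    return res.status(403).end(); // wrong number: flag or drop this session
  }
  const next = Math.floor(Math.random() * 1e6);
  controls.set(session, next);
  // buildCheatDetectScript is hypothetical: it bakes the next control number
  // into the component so only a client that actually ran it can ask again.
  res.type('application/javascript').send(buildCheatDetectScript(next));
});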
There's nothing you can really do to prevent anyone from modifying your JS or writing a GreaseMonkey script. However you can make it hard for them by minifying your script as well as making your code as cryptic as possible. Maybe even throwing in some fake methods or variables that do nothing but are used to throw an attacker off. But given enough time, none of these methods are completely foolproof, as once your code goes to the client, it is no longer yours.
The only way I can even think of implementing this is by modifying your Javascript to function as a client and then designing a central server mechanism to validate data sent from that client. This is probably a big change to implement and will most likely make your project more complex. However, as was said earlier, if the application runs entirely on the client, the client can pretty much do whatever they want with your script. The only way to secure it to use a trusted machine to handle validation.
They don't have to touch your client-side code -- they could just sniff and implement your Websocket protocol and write a tiny agent that pretends to be a human player.
Update: The problem has a few parts, and I don't have answers off the top of my head, but the various options could be evaluated with these questions in mind:
How far are you willing to go to prevent cheating? If you only care about casual cheating, how many barriers are enough to discourage the casual cheater? The intermediate Javascript programmer? A serious expert? Weighing this against the benefits of cheating, is there anything of real value at stake, like cash and prizes, or just reputation?
How do you get high confidence that a human is providing inputs to your game? For example, with a good enough computer vision library I could model your game on a separate machine and feed inputs back, pretending to be the mouse, but this has a high relative cost (not worth my time).
How can you create a chain of trust in your protocol such that knowledge of (2) can be passed to the server, and that your server is relatively confident your client code is sending the messages?
Sure many of the roadblocks you throw up can be side-stepped, but what is the cost to the player and you? See "Attrition warfare".
Some other methods that can be implemented:
Make the target elements difficult for a script to distinguish from other elements. Avoid divs with predictable class and id names if possible. Inject styling using JavaScript instead of using classes. Think like a hacker and make it hard on yourself.
Use decoys that a script will fire on. For instance, if the threat vector is a screen scraping algorithm using pixel colors, throw some common pixel colors in non-target elements. Hits on these non-targets could seem inconsequential to the cheater, but would be detectable. You don't want the cheater to know why you know.
Limit the minimum time between actions to slightly below the best human levels. The best players will hit that plateau, so it won't matter as much who's cheating, and you'll immediately be able to detect anyone scripting faster than that by side-calling method calls.
Random number generators are typically uniform; human behaviour is not. A random number generator will likely have values within a set limit and an even distribution, whereas natural distribution is a Gaussian curve. If you sample the distribution and it looks like a square wave on the x and y axes, it's 100% a cheater. This will be fairly difficult for the cheater to detect the threshold for, because it's a derivative of the randomness and not the random distribution itself. You're also using aggregate data rather than individual plays to detect it, so reverse engineering the algorithm would be extremely difficult without knowing your detection algorithm.
Utilize entropy whenever possible. Avoid predictable game plays. Imagine a racing game on a set collection of race tracks. Each game play could have slightly differing levels of traction, horsepower, and momentum. The script would have to be extremely good to beat it. In a scrolling game, you can alter factors that are instinctual to humans, but difficult for computers, such as wind force, changes in gravity, etc. It would also make it more fun as a side benefit.
Server-generated tokens can be used to validate that UI elements were actually used, rather than calls made directly to the code. Validation can be handled in one call at the end of the game, comparing events to the hashed codes of UI elements. The token should be a hash of a server private key and some value of the UI element (a sketch of this follows after the list).
Decoy the cheater with data they think you're using to detect cheats, such as calls to a DetectCheat method with dummy calls to a fake backend. It's the old magician's trick: wave your hand over here while you slip a card into the deck with the other hand. Let them waste days on end in a maze that has no exit, with lots of hair pulling.
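For the server-generated token idea above, a hedged sketch; the key handling, element ids and action format are assumptions:
// Hand each UI element an HMAC the client must echo back with its actions.
const crypto = require('crypto');
const SERVER_KEY = process.env.UI_TOKEN_KEY; // never leaves the server

function tokenFor(elementId, sessionId) {
  return crypto.createHmac('sha256', SERVER_KEY)
    .update(elementId + ':' + sessionId)
    .digest('hex');
}

// At the end of the game, one pass over the reported events:
function validateActions(actions, sessionId) {
  return actions.every(function (a) {
    return a.token === tokenFor(a.elementId, sessionId);
  });
}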
I'd use a combination of minification and AJAX. If all of the functions and data aren't loaded into the page, it'd be more difficult to cheat.
On the other hand, modding turned out to be a very profitable tool for companies like Id Software. Perhaps allowing the system to be modded might make the game that much more enjoyable to the community at large.
Obfuscate your client exposed code as much as possible. Additionally, use some magic.
You can edit the javascript on the browser and make it work.
Some people suggest making a call to check with the server. So after the call is made, it gets validated on the server; once validated, the response comes back to the client side and the actions proceed. But I think even this is not foolproof.
For example, for a basic login action: in Angular, while making a call to the server, the backend validates the username & password and, if valid, responds to the client and lets the user log in using Angular.
When I say log in using Angular, it is going to store things in cookies, like user objects and other things. But the user can still remove the JS code that makes the call to the backend, return TRUE wherever needed, insert a (dummy) user object into cookies and whatever other objects are needed, and log in. It is a very difficult thing to do, but it is doable. In many scenarios this is not desirable, even if it takes hours to edit/hack the code.
This is possible in single-page applications, where JS files don't get reloaded for each page. To mitigate the possibility of getting hacked we can use minified code. And I guess if actions like this are done in the backend (like login in Django) it is much safer.
Please correct me if I am wrong.
I came across a site that does something very similar to Google Suggest. When you type in 2 characters in the search box (e.g. "ca" if you are searching for "canon" products), it makes 4 Ajax requests. Each request seems to get done in less than 125ms. I've casually observed Google Suggest taking 500ms or longer.
In either case, both sites are fast. What are the general concepts/strategies that should be followed in order to get super-fast requests/responses? Thanks.
EDIT 1: by the way, I plan to implement an autocomplete feature for an e-commerce site search where it 1) provides search suggestions based on what is being typed and 2) shows a list of potential product matches based on what has been typed so far. I'm trying for something similar to SLI Systems search (see http://www.bedbathstore.com/ for example).
This is a bit of a "how long is a piece of string" question and so I'm making this a community wiki answer — everyone feel free to jump in on it.
I'd say it's a matter of ensuring that:
The server / server farm / cloud you're querying is sized correctly according to the load you're throwing at it and/or can resize itself according to that load
The server /server farm / cloud is attached to a good quick network backbone
The data structures you're querying server-side (database tables or what-have-you) are tuned to respond to those precise requests as quickly as possible
You're not making unnecessary requests (HTTP requests can be expensive to set up; you want to avoid firing off four of them when one will do); you probably also want to throw in a bit of hysteresis management (delaying the request while people are typing, only sending it shortly after they stop, and resetting that timeout if they start again); see the sketch after this list
You're sending as little information across the wire as can reasonably be used to do the job
Your servers are configured to re-use connections (HTTP 1.1) rather than re-establishing them (this will be the default in most cases)
You're using the right kind of server; if a server has a large number of keep-alive requests, it needs to be designed to handle that gracefully (NodeJS is designed for this, as an example; Apache isn't, particularly, although it is of course an extremely capable server)
You can cache results for common queries so as to avoid going to the underlying data store unnecessarily
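To illustrate the "no unnecessary requests" point from the list, a small sketch of debouncing the suggestion request and cancelling stale ones. The element id, endpoint and render function are assumptions, and the 250 ms delay is just a starting point:
// Only fire the suggestion request after the user pauses typing.
const searchInput = document.getElementById('search');
let timer = null;
let controller = null;

searchInput.addEventListener('input', function () {
  clearTimeout(timer);
  timer = setTimeout(async function () {
    if (controller) controller.abort(); // drop any request still in flight
    controller = new AbortController();
    try {
      const q = encodeURIComponent(searchInput.value);
      const res = await fetch('/suggest?q=' + q, { signal: controller.signal });
      showSuggestions(await res.json()); // hypothetical render function
    } catch (err) {
      if (err.name !== 'AbortError') console.error(err);
    }
  }, 250);
});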
You will need a web server that is able to respond quickly, but that is usually not the problem. You will also need a database server that is fast and can query very quickly which popular search results start with 'ca'. Google doesn't use a conventional database for this at all; it uses large clusters of servers and a Cassandra-like database, and most of that data is kept in memory as well for quicker access.
I'm not sure if you will need this, because you can probably get pretty good results using only a single server running PHP and MySQL, but you'll have to make some good choices about the way you store and retrieve the information. You won't get these fast results if you run a query like this:
select
q.search
from
previousqueries q
where
q.search LIKE 'ca%'
group by
q.search
order by
count(*) DESC
limit 1
This will probably work as long as fewer than 20 people have used your search, but will likely fail on you before you reach 100,000.
This link explains how they made instant previews fast. The whole site highscalability.com is very informative.
Furthermore, you should store everything in memory and should avoid retrieving data from the disc (slow!). Redis for example is lightning fast!
You could start by building a fast search engine for your products. Check out Lucene for full-text searching; it is available for PHP, Java and .NET amongst others.
I am working on a simple notification service that will be used to deliver messages to the users surfing a website. The notifications do not have to be sent in real time, but it might be a better user experience if they happened more frequently than, say, every 5 minutes. The data being sent to and from the client is not very large and it is a straightforward database query to retrieve it.
In reading other conversations on the topic, it would appear that an AJAX push can result in higher server loads. Since I can tolerate longer server delays, is it worthwhile to have the server push notifications, or should I simply poll?
It is not much harder to implement the push scenario and so I thought I would see what the opinion was here.
Thanks for your help.
EDIT:
I have looked into a simple AJAX Push and implemented a simple demo based on this article by Mike Purvis.
The client load is fairly low at around 5k for the initial version and expected to stay that way for quite some time.
Thank you everyone for your responses. I have decided to go with the polling solution but to wrap it all within a utility library so that if they want to change it later it is easier.
I'm surprised no one here has mentioned long polling. Long polling means keeping an open connection for a longer period (say 30-60 seconds), and once it closes, re-opening it, with the socket/connection simply listening for responses. This results in fewer connections (but longer ones), and means that responses are almost immediate (though some may have to wait for a new polling connection). I'd like to add that, in combination with technologies like NodeJS, this results in a very efficient and resource-light solution that is 100% browser compatible across all major browsers and versions, and does not require any additional tech like Comet or Flash.
I realize this is an old question, but thought it might still be useful to provide this information :)
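A hedged sketch of long polling on a Node/Express back end (the route, timeout and payload shape are assumptions): the response is held open until a notification arrives or a timeout fires, and the client simply reconnects each time.
// Hold responses open; answer the moment something happens.
const express = require('express');
const app = express();
app.listen(3000);

const waiting = []; // pending long-poll responses

app.get('/notifications/poll', function (req, res) {
  const timeout = setTimeout(function () { finish(res, []); }, 30000); // give up after 30s
  waiting.push({ res: res, timeout: timeout });
});

function finish(res, payload) {
  if (!res.headersSent) res.json(payload);
}

// Called by whatever produces notifications (cron job, POST handler, ...).
function notifyAll(message) {
  while (waiting.length > 0) {
    const w = waiting.shift();
    clearTimeout(w.timeout);
    finish(w.res, [message]); // the client re-polls immediately after receiving this
  }
}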
Definitely use push; it's much cooler. If you just want simple notifications, I would use something like the StreamHub Push Server to do the heavy lifting for you. Developing your own Ajax push functionality is an extremely tricky and rocky road: you have to get it working in all browsers and then handle firewalls and proxies killing keep-alive connections, etc... Why reinvent the wheel? Also, it has a similarly low footprint of less than 10K, so it should suit if that is a priority for you.
Both have different requirements and address different scenarios.
If you need realtime updates, like in an online chat, push is a must.
But if the refresh period is long, as it is in your case (5 minutes), then polling is the appropriate solution. Push, in this case, would require a lot of resources from both the client and the server.
Tip! Try to make the page that checks the pool fast and clean, so it doesn't consume a lot of server resources on each request. What I usually do is keep a flag in memory (like in a session variable) that says whether the pool is empty or not, so I only do the heavy lookup in the pool if it is not empty. When the pool is empty, which is most of the time, the page request runs extremely fast. (A sketch of this idea follows below.)
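A hedged sketch of that tip, translated to a Node/Express handler (the flag, the DB helper and the route are placeholders; the same idea works in PHP with a session or APCu flag):
// Cheap in-memory check first; only hit the database when something is pending.
const express = require('express');
const app = express();
app.listen(3000);

let hasPendingNotifications = false; // set to true by whatever enqueues a message

app.get('/notifications/check', async function (req, res) {
  if (!hasPendingNotifications) {
    return res.json([]); // the common case: no query at all
  }
  const rows = await getPendingNotifications(req.user.id); // hypothetical DB helper
  if (rows.length === 0) hasPendingNotifications = false;
  res.json(rows);
});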
Because push requires an open HTTP connection to be maintained between your server and each client, I'd go for polling as well: not only is push going to consume a lot of server resources, but it's also going to be significantly trickier to implement, as matt b mentioned.
My experience with polling is that if you have a frequent enough polling interval on a busy enough site your web server logs can get flooded with poll requests real quickly.
Edit (2017): I'd say your choices are now are between websockets and long polling (mentioned in another answer). Sounds like long polling might be the right choice based on the way the question mentions that the notifications don't need to be received in real time, an infrequent polling period would be pretty easy to implement and shouldn't be very taxing on your server. Websockets are cool and a great choice for many applications these days, sounds like that might be overkill in this case though.
I would implement a poll just because it sounds simpler to write, and keeping it simple is very valuable.
Not sure if you have taken a look at some of the COMET implementations out there (is that what you mean by AJAX push).
If the user is surfing the site, won't that in effect be requesting information from the server that this notification can piggy-back on?
It's impossible to say whether polling will be more expensive than pushing without knowing how many clients you'll have. I'd recommend polling because:
It sounds like you want to update data about once per minute. Unless notifications are able to arrive at a much faster rate than that, pushing would mean you're keeping an HTTP connection open but seeing very little activity on it.
Polling is built on top of existing HTTP conventions, so any server that talks to web browsers is already ready to respond to ordinary Ajax requests. A Comet– or Flash socket–based solution has different requirements; you'll need something like cometd on the server side and a client-side library that groks server-side push.
So if you needed something heavy-duty to manage a torrent of data and a crapload of clients, I'd recommend Comet. But that doesn't seem to be the case.
There's now a service, http://pusherapp.com, that is trying to solve this problem once and for all, in a blink. Might be worth checking out. (Disclaimer: I am in no way associated with them.)
I haven't tried it myself, but some say COMET works and is easier than you think. There's also a Ruby on Rails plug-in called Juggernaut that I've heard talked about highly. Again, I haven't used it, so YMMV, but my understanding is that it takes far fewer resources compared to polling. I believe (can someone confirm?) that COMET is how MacRumorsLive.com delivers live blogging of WWDC Stevenotes.