Testing Internet connectivity problems such as hangs and timeouts with JavaScript

I have a web app that I provide to small to medium-sized businesses as a service. Most of the support calls I get concerning speed issues end up being caused by problems on my clients' networks.
I would like to be able to point them to a specific web page that will run an x-minute test where multiple HTTP requests would be made via JavaScript every y seconds, which would then report back any problems such as longer-than-usual responses (>5 seconds) and timeouts. In theory this would mimic normal web app usage without taking into account things like database hits and would help reproduce intermittent hangs.
My question is two-fold:
Is this an effective way to test simple HTTP connectivity (beyond ping/tracert) to determine if the slowness a client is experiencing is HTTP related?
Does anyone know of any examples of this type of thing?
I'm not necessarily looking for code; I'm curious whether this is a viable solution to helping me more quickly diagnose my clients' connectivity issues.
Thanks.

You can make periodic calls with XMLHttpRequest and accurately obtain some network diagnostics. For instance, you can determine how fast files are transferring by measuring how long they take to download. You can also run periodic tests and count the failures to estimate packet loss, although a tool like fping is much better at identifying packet loss. The Same Origin Policy for JavaScript puts a serious limitation on what you can do. A client-side application written in C++ or even Java (maybe an applet?) would be more accurate and could provide a wider variety of tests (like the tests that fping is capable of).
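As a rough illustration of those periodic calls, here is a minimal sketch; the /ping endpoint, interval, and thresholds are placeholders you would adapt to your own app, not part of any existing API:

    // Minimal connectivity probe: fires a request every INTERVAL_MS and
    // records slow responses (> 5 s) and timeouts. The /ping endpoint is
    // hypothetical -- point it at any small, uncached resource on your server.
    var INTERVAL_MS = 10000;
    var SLOW_MS = 5000;
    var results = [];

    function probe() {
      var started = Date.now();
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/ping?nocache=' + started, true);
      xhr.timeout = 15000;                      // treat anything longer as a failure
      xhr.onload = function () {
        var elapsed = Date.now() - started;
        results.push({ at: started, ms: elapsed, slow: elapsed > SLOW_MS, ok: true });
      };
      xhr.ontimeout = xhr.onerror = function () {
        results.push({ at: started, ms: null, slow: false, ok: false });
      };
      xhr.send();
    }

    setInterval(probe, INTERVAL_MS);

At the end of the x-minute run you can summarize the results array (or POST it back to your server) to show the client how many requests were slow or failed.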

There are lots of sites that measure connection speeds by making you download some big image and measuring the time it takes until the browser gets it, and as far as I know that measurement is made using JavaScript.
So, if you google for internet speed test you'll probably get lots of inspiring examples for what you need.
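If you decide to roll your own, the underlying trick in most of those speed-test pages is simply timing the download of a file of known size; a minimal sketch, assuming a hypothetical test image at /speedtest/1mb.jpg:

    // Rough download-speed estimate: time how long a known-size, uncached
    // image takes to arrive. The URL and byte size are placeholders.
    var TEST_IMAGE_URL = '/speedtest/1mb.jpg';
    var TEST_IMAGE_BYTES = 1024 * 1024;

    function measureSpeed(callback) {
      var img = new Image();
      var started = Date.now();
      img.onload = function () {
        var seconds = (Date.now() - started) / 1000;
        callback((TEST_IMAGE_BYTES * 8 / seconds / 1e6).toFixed(2) + ' Mbit/s');
      };
      img.src = TEST_IMAGE_URL + '?nocache=' + started;   // cache-buster
    }

    measureSpeed(function (speed) { console.log('Approx. ' + speed); });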

Related

Best way to determine if new data available

Many high-load sites notify their users about new messages/topics at runtime, without reloading the page. How do they do that? Which approach do they use?
I assume there are two approaches:
"Asking" the server with JavaScript at a fixed interval (polling)
Using WebSockets
Common opinion says the first one is too heavy for the server, since it produces too many requests.
I know nothing about how the second one behaves in high-load apps; is it a good choice?
So, which design approach should I use to implement features like "new message available" properly, without reloading the page?
The question is mostly about performance :)
WebSocket performance in the browser is not an issue, and on the server side there are performant implementations. As an example, Crossbar.io can easily handle 180k concurrent connections on a small server (tested in a VM on an older i5 notebook), and 10k/s messages - and both scale with the hardware (RAM and CPU respectively). Also: Something like Crossbar.io/Autobahn/WAMP gives you a protocol on top of WebSockets to handle the distribution of notifications to clients, making your life easier.
Full disclosure: I work for the company that works on Crossbar.io, and there are other WebSocket and PubSub solutions out there. Take a look at what best fits your use case and go with that.
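For a sense of what the bare-bones version of the second approach looks like (no WAMP layer on top), here is a minimal sketch; the endpoint, message format, and showBadge helper are assumptions for illustration, not a Crossbar.io API:

    // Minimal client for push notifications over a plain WebSocket.
    // The URL and message shape are made up for this example.
    function connect() {
      var ws = new WebSocket('wss://example.com/notifications');

      ws.onmessage = function (event) {
        var msg = JSON.parse(event.data);
        if (msg.type === 'new_message') {
          showBadge(msg.count);           // hypothetical UI update, no page reload
        }
      };

      // Reconnect after a short delay so transient drops don't kill notifications.
      ws.onclose = function () {
        setTimeout(connect, 5000);
      };
    }

    connect();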

How to handle massive computation on websites? Web workers or CGI?

I've written a JavaScript-based website where you can enter, edit and solve nonograms. As you may know, solving a nonogram is an NP-complete problem.
My first attempt was pure (single-threaded) JavaScript. But on larger nonograms, Chrome showed its BSOD and killed the JS script after a few minutes. The next attempt was to use Web Workers. I split the solving algorithm so that each worker gets one row/column to solve and returns the result. This was an improvement and it was able to solve medium-size nonograms. But still, sometimes the browser killed the JS VM and showed the BSOD after some time, plus the website was not as responsive as I would have expected, since responsiveness is what Web Workers are made for, isn't it?
Just for "fun", I ported the solving algorithm to Python and used AJAX requests calling a Python script instead of Web Workers. Interestingly, it was even slower than JavaScript, but after some time of computation the request returned a 500 Internal Server Error. I believe this is due to the maximum execution time of a CGI script, which is 30 s for PHP AFAIK.
The CGI idea was not the best, because when multiple users want to solve a nonogram the server runs at 100% CPU, so I'll probably stick with client-side computation.
So the question is: what is the best way to do this computation (which could take something like 10 min for larger nonograms)? I think the execution time is not an issue as long as the website stays responsive and the browser does not kill the execution tasks.
In the meantime, I'm also trying to optimize the recursive algorithm....
Thanks!
Maybe you could add some delay before you post messages to the Web Workers. If the process is split up into small enough functions, it may be possible to keep the page responsive, though it will take a lot longer to solve.
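Something along these lines, assuming a solver that can be driven one row/column at a time; the worker script name and the buildJobs/applyPartialResult helpers are made up for illustration:

    // Feed the worker one small job at a time instead of flooding it, yielding
    // back to the event loop between posts so the page stays responsive.
    var worker = new Worker('solver-worker.js');   // hypothetical worker script
    var queue = buildJobs();                       // e.g. one entry per row/column

    function postNext() {
      if (queue.length === 0) return;              // all jobs done
      worker.postMessage(queue.shift());
    }

    worker.onmessage = function (event) {
      applyPartialResult(event.data);              // merge the solved line into the grid
      setTimeout(postNext, 10);                    // small delay before the next job
    };

    postNext();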

End user experience monitoring tools

I have a web application with a great deal of both client-side and server-side logic. It is considered business-critical that this application feel responsive to the end user, for some definition of "feels responsive." ;)
Most website monitoring discussions revolve around keeping an eye on server-side metrics (response time, I/O queue depth, latency, CPU load, etc.), i.e. we tend to treat server performance and responsiveness as though it's a viable "proxy" for what the user is experiencing.
Unfortunately, as we move more and more logic to client side Javascript, the correlation decreases and our server metrics become less useful.
I didn't find any good matching SO questions on this. Googling gives a range of commercial products that might be related, but they're generally from the manufacturers' websites, full of unhelpful marketspeak and "please call us for details," so it's hard to know.
Are there any commonly-used tools for this sort of thing, other than rolling your own? Both free and commercial are welcome, although free is obviously better all else being equal.
EDIT: To clarify, I primarily need to gather bulk data on the user experience, including both responsiveness and breakage/script errors. Automatic analysis is a very-nice-to-have, although I'd expect to have to occasionally dig into the data myself regardless of the solution.
There are some freely available tools for performance monitoring. Yahoo open-sourced a script they used called Boomerang which can measure page load times and other performance metrics for end-users. Full documentation here. Google analytics also offers a basic page load time report.
For error monitoring, you'll want to listen for the window.onerror event. I don't know of any scripts that will automatically log it for you, or mine the logs on the server side. If you implement your own, you'll want to be very careful about not pinging the server too often--imagine how many requests it would generate if there was a JS error in your JS error handling code!
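As a minimal sketch of the roll-your-own option, with a cap so a repeating error can't flood the server; the /log-error endpoint and the cap value are assumptions:

    // Report uncaught JS errors back to the server, capped so a looping
    // error can't generate a request storm. The endpoint is a placeholder.
    var reported = 0;
    var MAX_REPORTS = 10;

    window.onerror = function (message, source, line, column) {
      if (reported >= MAX_REPORTS) return;   // stop pinging the server after the cap
      reported++;
      var img = new Image();                 // beacon-style request, no XHR needed
      img.src = '/log-error?' +
        'msg=' + encodeURIComponent(message) +
        '&src=' + encodeURIComponent(source) +
        '&line=' + line + '&col=' + column;
    };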
Bucky Client and Bucky Server can perform that task:
http://github.hubspot.com/bucky/
From their website:
Open-source tool to measure the performance of your web app directly from your users' browsers.
To analyse the data they suggest Graphite or OpenTSDB.
You can try Atatus, which provides Real User Monitoring (RUM) and advanced error tracking for websites and web apps.
https://www.atatus.com/
http://www.whitefrost.com/documents/html/technical/dhtml/funmon.html#part1 tests the performance of javascript functions.
You can utilize Dynatrace Ajax for measuring and profiling the performance of JavaScript in IE and Firefox. Chrome has built-in tools for this; take a look at:
http://blog.chromium.org/2011/05/chrome-developer-tools-put-javascript.html
For monitoring the performance of the overall application/site I would recommend synthetic monitoring using real browsers, also known as web performance monitoring. These are services with robotic agents sitting on backbone ISPs performing the same activity as end users.
We utilize Catchpoint, which supports Selenium scripting. But there are others like Gomez and Keynote out there that have been providing such solutions for years.
You can also check out New Relic - now it has "real user monitoring" integrated - which measures the performance across all browser types. There is a 14 day trial period so you can set it up for free and see if you like it. You'll get visibility into browser rendering speed, DOM processing, the time it spends on the network, all the way back to your app performance on the server.

Best practice for "hidden" JavaScript HTTP request?

I'm not exactly sure how to formulate the question, but I think it's more of a suggestions request, instead of a question per se.
We are building an HTML5 service on which users get credited (rewarded, in social gaming lingo) for completing a series of offers. Most of these offers are video ad watching. We already have an implementation of this built on Flash, but for HTML5 I'm running into more issues around how to make the request calls that validate legitimately watched video ads. On the Flash interface, we have a series of HTTP requests that the SWF makes: some when video playback starts, some in the middle and some at the end, and each of those requests depends on the response of the previous one. Most of the logic that "hides" this "algorithm" is lightly concealed in the SWF binary, and it pretty much serves its purpose.
However, for HTML5 we have to rely on world-visible JavaScript, and that "hidden" logic is wide open. So, I guess this is a call for suggestions on how these cases are usually handled so that a skilled person could not (so easily) get access to it and exploit the service to get credited programmatically. Obfuscating the JavaScript seems like something that could help, but it in no way protects fully.
There is of course some extra security on the backend (like frequency capping, per-user capping, etc.), but since our capping clears every day, a skilled person could still find a way to get credit for all available offers without completing them.
It sounds like you want to ensure that your server can distinguish requests that happened as the result of the user interacting with your UI in ways you approve of from requests that did not happen that way.
There are a number of points of attack on such a system.
1. Inspect the JavaScript to find the event handlers and invoke them via Firebug or another tool.
2. Extract any keys from your code, and generate the HTTP requests without involving the browser.
3. Run code in the browser to programmatically generate events.
4. Use a 3rd-party tool that instruments the browser to generate clicks.
If you've got reasonable solutions to instrumentation attacks (3 and 4), then you can look at Is there any way to hide javascript functions from end user? for ways to get secrets into the client to allow you to sign your requests. Beyond that, obfuscation is the only (and imperfect) way to stop a not-too-determined attacker from any exploitation, and rate-limiting and UI event logging are probably your best bets for stopping determined attackers from benefiting from wide-scale fraud.
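As a rough sketch of the chained-request idea from the question, applied in JavaScript: each progress ping carries a token the server issued in response to the previous request, so a client that skips a step breaks the chain. The endpoints, payload shape, and token scheme below are assumptions, and a determined attacker can still drive the chain programmatically; it only raises the cost:

    // Chain each "progress" ping to a server-issued token so the server can
    // tie requests together and reject skipped or out-of-order steps.
    // Endpoint names and the token scheme are illustrative only.
    var sessionToken = null;

    function post(url, payload, onDone) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', url, true);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.onload = function () { onDone(JSON.parse(xhr.responseText)); };
      xhr.send(JSON.stringify(payload));
    }

    function startOffer(offerId) {
      post('/offer/start', { id: offerId }, function (data) {
        sessionToken = data.token;        // server remembers which token it issued
      });
    }

    function reportProgress(offerId, seconds) {
      // Each call sends the previous token; the server answers with the next one.
      post('/offer/progress', { id: offerId, at: seconds, token: sessionToken },
        function (data) { sessionToken = data.token; });
    }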
You will not be able to prevent a determined attacker (even with SWF, though it's more obfuscated). Your best bet is to make sure that:
Circumventing your measures is expensive in terms of effort, perhaps by using a computationally expensive crypto algorithm so they can't just set up a bunch of scripts to do it.
The payoff is minimal (user-capping is an example of how to reduce payoff; if you're giving out points, it's fine; if you're mailing out twenty-dollar bills, you're out of luck).
Cost-benefit.

Is the XMLHttpRequest implementation optimized enough to send thousands of requests without a page reload?

We need to develop a client application that has to update some values (about 10-20 integers) each second over HTTP (however, the HTTP server is running on the same machine, so requesting 'localhost' is fast).
Since the UI must be easy to modify, the decision was made to simply develop an HTML website and update the values using XMLHttpRequest (actually, using jQuery).
The problem is that the client is supposed to work continuously for a whole year with no restart... This gives 3 600 requests per hour, 86 400 per day, and finally 31 536 000 requests per year. I'm very afraid of how the browser will deal with such an amount of requests... Has anyone any experience with a "stress test" of AJAX requests?
Would reloading the page once a day help the browser with cleaning up memory?
First issue: Javascript runtime performance varies from browser to browser. You'd be better off finding a fast, stable browser than worrying about jQuery's AJAX performance. I'd be much more worried about keeping an instance of a browser rendering and executing for a whole year.
Second issue: javascript in the browser isn't a timing-precise language. setInterval and setTimeout are unreliable and do not guarantee timing. jQuery's author wrote this article on Javascript time issues, so Javascript might lead to some problems if you need that really precise timing.
Third issue: if your client needs to run for 365 days without restarting, then aren't the sacrifices you're making by choosing to build an HTML/JS frontend somewhat silly for the lofty goal of being "easily editable"?
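If the once-per-second cadence matters despite the setTimeout/setInterval drift mentioned in the second issue, a common workaround is a self-correcting loop; a minimal sketch, where fetchValues() stands in for your existing AJAX update:

    // Self-correcting poll loop: each tick recomputes its delay from the clock,
    // so setTimeout drift doesn't accumulate over a long-running session.
    var PERIOD_MS = 1000;
    var next = Date.now() + PERIOD_MS;

    function tick() {
      fetchValues();                              // your existing AJAX update
      next += PERIOD_MS;
      setTimeout(tick, Math.max(0, next - Date.now()));
    }

    setTimeout(tick, PERIOD_MS);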
Make sure they're not using IE and you'll be off to a flying start.
XMLHttpRequest is able to handle that many queries (short-polling frameworks like Comet do this), but lots of queries may have an impact on the browser's responsiveness, depending on the browser. You can easily scale the number of queries down by a factor of 10-20 just by requesting all the integers together (using a JSON data structure, for example). You may also want to look at short polling, as it is designed for this kind of purpose.
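A minimal sketch of that batching idea with jQuery, which the question already uses; the /values endpoint, payload shape, and element IDs are placeholders:

    // Fetch all 10-20 integers in one JSON request per second instead of
    // one request per value. The endpoint and field names are made up.
    setInterval(function () {
      $.getJSON('/values', function (data) {
        // e.g. data = { temperature: 21, pressure: 1013, ... }
        $.each(data, function (name, value) {
          $('#' + name).text(value);      // update the matching UI element
        });
      });
    }, 1000);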
