We executed the performance tests. We started with 25 users, and the application crashes and stops responding at around 15 users.
The error percentage gradually increased from 0 to 100 for 15 users within 2 to 4 minutes. Below are the errors and screenshots from the performance tests.
Errors:
• Server side errors
• The resource cannot be found.
• HTTP 404 errors.
Can you give some pointers to improve the performance?
Well, there could be many reasons for the failure. Did you keep JMeter and the server on the same machine?
If yes, that is probably the reason for the failures, as both JMeter and the server consume resources.
If not, then check the following settings.
Here is a checklist of suggestions/pointers:
Client Side: Simulate the anticipated load - neither less nor more (there is a chance that a wrong test design/script generates more or less than the anticipated load).
Ramp-Up: Check how the 25 threads are launched. There should be enough ramp-up time (say, 25 threads over 2 minutes) unless you are conducting a Stress/Spike test.
User Think Time: Check whether you added a timer between transactions. If not, you are generating more load than anticipated. Add timers between transactions to replicate a real-world scenario.
Pacing: Check whether you allow any time between iterations. If not, execution runs faster than real users would.
Misplaced timers (scope): Check whether your timer applies to all the samplers/transactions it should.
Cache configuration: Configure caching for static resources using the HTTP Cache Manager, so that from the second iteration JMeter uses its cache instead of requesting the server. Do this only if the server allows the client to cache the resources (check Cache-Control and other related headers); otherwise, this configuration can be ignored.
Parallel requests: If you are using the Parallel Downloads field in the HTTP Sampler, don't set it higher than 6 (modern browsers use about six parallel connections to download resources).
If misconfigured, the factors above can generate unintended load.
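The effect of think time and pacing on generated load can be estimated with the simple relation throughput ≈ threads / (response time + think time + pacing). A quick sketch of the arithmetic (all numbers here are illustrative assumptions, not measurements from the tests above):

```javascript
// Estimate the request rate a JMeter thread group generates.
// All times are in seconds; the result is requests per second.
function estimateThroughput(threads, responseTime, thinkTime, pacing) {
  return threads / (responseTime + thinkTime + pacing);
}

// 25 threads against a 0.5 s average response time:
const noTimers = estimateThroughput(25, 0.5, 0, 0);     // 50 req/s - no think time or pacing
const withTimers = estimateThroughput(25, 0.5, 3, 1.5); // 5 req/s - closer to real users
console.log(noTimers, withTimers);
```

Ten times the intended load from the same thread count is exactly the kind of misconfiguration this checklist guards against.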
Server Side:
Scarcity of resources on the server machines: Use nmon on Linux or PerfMon on Windows. Analyze the results and find which resource is causing the trouble: CPU, memory, network, or disk I/O. (This is the most common reason for a server crash.)
Misconfiguration of the server: Check maxThreads, minThreads, maxConnections, Keep-Alive timeout, connection timeout, etc., and tweak them as per your needs.
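As an illustration, on Apache Tomcat these settings live on the HTTP connector in server.xml; the values below are illustrative starting points, not recommendations for this application:

```xml
<!-- server.xml: illustrative connector tuning; adjust to your workload -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           minSpareThreads="10"
           acceptCount="100"
           connectionTimeout="20000"
           keepAliveTimeout="60000" />
```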
Possible solution:
If resources are the bottleneck and you are generating the anticipated load, then you either have to scale up (add the bottlenecked resource to the existing machine) or scale out (deploy the app on more server machines).
Related
In the Chrome developer tools, I notice that I have 6 TCP connections to a particular origin. The first 5 connections are idle from what I can tell. On the last of those connections, Chrome is making calls to our Amazon S3 to get some images per the application logic. What I notice is that all the requests on that connection are queued until a certain point in time (say T1) and then the images are downloaded. Of course, this scenario is hard to reproduce, so I am looking for some hints on what might be going on.
My questions:
The connection in question does not have the "initial connection" in the timing information, which means that the connection might have been established before in a different tab. Is that plausible?
The other 5 connections for the same origin are to different remote addresses. Is that the reason they cannot be used to retrieve images that the 6th connection is retrieving?
Is there a mechanism to avoid this queueing delay in this scenario on the front end?
From the docs (emphasis mine):
A request being queued indicates that:
• The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
• The request was put on hold to wait for an unavailable TCP socket that's about to free up.
• The request was put on hold because the browser only allows six TCP connections per origin on HTTP/1.
• Time was spent making disk cache entries (typically very quick).
This could be related to the number of images you are requesting from your Amazon service. According to this excerpt, requests on different origins should not impact each other.
If you are loading a lot of images, sprite sheets or something similar may help you along - but that will depend on the nature of the images you are requesting.
It seems like you are making too many requests at once.
Since HTTP/1.1 restricts the maximum number of active connections per origin to 6, all other requests get queued until the active requests complete.
As an alternative, you can use HTTP/2 (the successor to SPDY) on the server, which doesn't have any such restriction and has many other benefits for applications making a huge number of parallel requests.
You can easily enable HTTP/2 on nginx/Apache.
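For instance, on nginx (1.9.5 or later) HTTP/2 is enabled with a single keyword on a TLS-enabled listener; the host name and certificate paths below are placeholders:

```nginx
server {
    listen 443 ssl http2;          # "http2" turns on HTTP/2 for TLS clients
    server_name example.com;       # placeholder host name
    ssl_certificate     /etc/nginx/certs/example.com.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```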
I'm not new to load testing. My job is to create load tests with JMeter. I did not have problems with thread counts, clients, memory consumption, etc. For JMeter load testing I used 3 clients, each running 1,000 threads (30% CPU was used), and results were as expected all the time. No problems were detected on the SUT or the clients.
Now I'm facing a new challenge: I have to execute web performance testing in browsers, and I don't know if I have a correct picture of this kind of performance testing.
I want to measure time to first byte, page load time, JavaScript, Ajax, etc.
Our web automation tests are written in Selenium, but Selenium is not meant for web load testing.
Let's say I simulate 1,000 users clicking in browsers; this means a lot of VMs, collecting results from the browsers (DOM counters), while the SUT is managed by a PowerShell script that reads performance counters. I see a problem: a lot of VMs also means a financial budget that I do not have.
I'm in doubt whether this approach is correct or whether I should change it.
One of these tools is Visual Studio Ultimate edition, but I would still need a lot of VMs to simulate 1,000 users/browsers.
On the internet I have read documents, descriptions, and top-tools lists.
How do you do challenging web page performance testing at your company?
Any help about web page performance will be appreciated.
Thanks,
Dani
A well-behaved JMeter script must look exactly like a real browser from the server's perspective; just make sure you properly handle the following bits:
Embedded resources via "Advanced" tab of the HTTP Request Defaults
Cache via HTTP Cache Manager
Cookies via HTTP Cookie Manager
Headers via HTTP Header Manager
AJAX requests via Parallel Controller
See How to make JMeter behave more like a real browser article for more details.
Assuming all of the above, you should be able to mimic the load at the HTTP protocol level. If you additionally need to measure rendering or JavaScript execution speed, you can add another Thread Group with 1 (or several) threads and use the WebDriver Sampler, which provides JMeter integration with Selenium. Additionally, you can collect some extended stats, e.g. from the Navigation Timing API, via the WDS.browser.executeScript() method, which allows execution of arbitrary JavaScript code.
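For example, the metrics in question can be derived from the window.performance.timing record that a WebDriver Sampler script could fetch with WDS.browser.executeScript('return window.performance.timing'). A sketch of the arithmetic only; the sample timestamps are made up:

```javascript
// Derive common page-speed metrics from a Navigation Timing record.
function pageMetrics(t) {
  return {
    timeToFirstByte: t.responseStart - t.navigationStart,
    domReady: t.domContentLoadedEventEnd - t.navigationStart,
    pageLoad: t.loadEventEnd - t.navigationStart,
  };
}

// Illustrative timestamps (milliseconds since the epoch):
const sample = {
  navigationStart: 1000,
  responseStart: 1180,
  domContentLoadedEventEnd: 1650,
  loadEventEnd: 2100,
};
console.log(pageMetrics(sample)); // { timeToFirstByte: 180, domReady: 650, pageLoad: 1100 }
```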
Per the title, is there a maximum number of GET requests?
I need to make a couple hundred GET requests to a REST API in order to dynamically load data into a webpage, but I find that if I build a Promise.all array and output the promise results in the .then, I eventually get undefined due to request timeouts.
Is this due to a limit on the number of connections? Is there a best practice for making a large number of simultaneous requests?
Thanks for your insight!
A receiving server has a particular capability for how many simultaneous requests it can handle. It could be a small number or a very large number depending upon a whole bunch of things including the server configuration, the server software architecture, the types of request being sent, etc...
If you're getting timeouts from the server, then you are probably sending enough requests that the server can't process all of them before whatever request timeout is configured (on either client or server) and thus you get a timeout error.
The usual way of handling this on the client is to control how many simultaneous requests you will send at once and then when one finishes, you can send the next and so on. You will have to test to find out what the capabilities are of the receiving server and then you should back off a bit from that to allow other load from other sources some room to execute while your requests are running.
Assuming your requests are not unusually heavy-weight things to do on the server, I would typically test 5 or 10 requests at a time and see how the receiving server handles that.
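The "send a few at a time" approach can be sketched with a small worker pool; runLimited here is a hypothetical helper, not from any library:

```javascript
// Run promise-returning task functions with at most `limit` in flight at once.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0; // index of the next task to start
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim a task (single-threaded event loop, so this is safe)
      results[i] = await tasks[i]();
    }
  }
  // Start `limit` workers; each picks up a new task as soon as its current one finishes.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, () => worker())
  );
  return results;
}
```

For example, runLimited(urls.map(u => () => fetch(u)), 5) would keep at most five requests outstanding while still returning results in the original order.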
There's a discussion of a lot of options for controlling this here:
Promise.all consumes all my RAM
Make several requests to an API that can only handle 20 request a minute
Concurrency control is also part of Promise.map() in the Bluebird promise library.
Is there a maximum number of Get requests?
Servers are limited on how many requests they can handle at once for a whole variety of reasons. Every server setup will likely be different and it also depends upon the types of requests you're sending too (and what they have to do). Some servers may be able to handle hundreds of thousands of requests (probably because there's a cluster behind them and they're configured for big load). Smaller configurations may only handle dozens at a time.
Is this due to a limit on the number of connections?
Any receiving server will have a limit on how many incoming connections it will allow to queue. What that is will depend upon many factors and there is no way for you (from the outside) to know exactly what that limit is. Timeout errors usually don't mean you're hitting this limit.
I recently started working with the Ionic framework using AngularJS. Here is my problem with $http.post.
My requirement is that I need to upload photos to my server. The user selects a bunch of photos (say 15) and starts uploading them to the server. Here is my code for uploading:
angular.forEach(photoList, function (photo) {
  $http.post(url, photo).then(
    function success(response) { /* handle success */ },
    function error(response) { /* handle failure */ }
  );
});
Now my problem is that out of 15, only 6-7 photos upload. For the remaining photos, the $http.post() calls don't even seem to go out. I heard there might be concurrency issues with $http. Is that correct?
If so, how do I resolve this issue?
You are running into a concurrency limit built into all web browsers by convention: browsers will not make more than a certain small number, usually 6 (details here), of simultaneous connections to the same origin host. You cannot bypass this in JavaScript or with any technology available to a web application. It exists for good reason: at some point the parallelism actually results in lower overall performance as the network saturates or other resource limits start to be hit. You must work within this limit. The browser automatically queues your requests within the concurrency threshold, so you don't need to add any special code; however, you can impose your own concurrency limit in code if you want to report more accurate progress to the end user.
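If you do want to throttle the uploads yourself (for per-batch progress reporting, say), one sketch is to chain fixed-size batches of promises; uploadPhoto below is a placeholder for your $http.post(url, photo) call:

```javascript
// Upload photos `batchSize` at a time; uploadPhoto(photo) must return a promise.
function uploadInBatches(photos, uploadPhoto, batchSize) {
  const batches = [];
  for (let i = 0; i < photos.length; i += batchSize) {
    batches.push(photos.slice(i, i + batchSize));
  }
  // Chain the batches so only one batch is in flight at a time.
  return batches.reduce(
    (chain, batch) =>
      chain.then(done =>
        Promise.all(batch.map(uploadPhoto)).then(uploaded => done.concat(uploaded))
      ),
    Promise.resolve([])
  );
}
```

Each batch waits for the previous one to finish, so no more than batchSize requests ever contend for the browser's per-origin connections, and the final promise resolves with all results in order.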
I'm developing a small app using Ajax and HTTP requests.
Currently I'm sending one request per second to the server to check whether there are updates and, if there are, to fetch and download the data.
This timing profile changes when a user interacts with the app, but that's negligible.
I'm just wondering: I could send requests to the server in an infinite loop; the more frequent the requests, the more responsive the app. But wouldn't the server then get too many requests? How much time should pass between one request and the next?
I've read something about tokens, but I can't work out the best way to check whether the server has updates. Thanks in advance.
Long polling is one of the main options here. You should check that your server has good support for persistent HTTP connections if you expect a large number of users to stay connected.
For example, the Apache web server on its own opens a thread per connection, which can be a notable challenge with many persistently connected users: you end up with lots of threads (there are approaches in Apache to address this, which you can research further if need be). Jetty, a Java-based web server (among my personal favorites), uses a more advanced network library that maps connections to threads far more efficiently, essentially putting connections to sleep until traffic in or out is detected.
Here are a few links:
http://en.wikipedia.org/wiki/Push_technology
http://en.wikipedia.org/wiki/Comet_(programming)
http://www.einfobuzz.com/2011/09/reverse-ajax-comet-server-push.html
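A minimal long-polling client can be sketched as a loop that re-issues the request as soon as the previous one returns; fetchUpdates here is a stand-in for an HTTP call that the server holds open until it has news or its hold timeout elapses:

```javascript
// Poll repeatedly; each fetchUpdates() call resolves with an update,
// or with null when the server's hold timeout expired with no news.
async function pollForUpdates(fetchUpdates, onUpdate, maxRounds) {
  for (let round = 0; round < maxRounds; round++) {
    const update = await fetchUpdates(); // waits as long as the server holds the request
    if (update !== null) onUpdate(update); // null means "no news, poll again"
  }
}
```

With long polling, the "right interval" question mostly disappears: the server, not the client, decides when to answer, so the client sits idle between updates instead of hitting the server once a second.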