When a JavaScript client application uses too much memory, the browser either crashes, throws an exception that can't be recovered from, or starts swapping like it's the 80s.
Do browsers signal that they have almost reached the available memory limit for a tab?
Ideally, I'd love to be able to catch an event that I can intercept in JavaScript when the browser is running low on memory in order to automatically fall back to a light version of the application or tell my users to go buy a new computer / phone.
I know Chrome Performance Tools allow imprecise querying of the used memory, which is a first step, but probably not enough to detect memory limitations.
No, there's no cross-browser way to detect this unfortunately. This is discussed a little bit in this answer.
There is window.performance.memory but that is only available in Chrome.
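For what it's worth, here is a minimal sketch of reading it; the 10-second polling interval and the 0.9 threshold are arbitrary illustrations, and the figures Chrome reports are deliberately coarse:

function heapUsageRatio() {
  var mem = window.performance && window.performance.memory; // non-standard, Chrome-only
  if (!mem || !mem.jsHeapSizeLimit) {
    return null; // not available in this browser
  }
  return mem.usedJSHeapSize / mem.jsHeapSizeLimit;
}

setInterval(function () {
  var ratio = heapUsageRatio();
  if (ratio !== null && ratio > 0.9) {
    console.warn('JS heap is close to its limit; time to fall back to the light version.');
  }
}, 10000);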
I'm not aware of any really good workarounds for this either. You could perhaps check for old browsers or browsers that don't have particular features ("feature detection") and suggest that users with older browsers use your "light" version, since those are the people most likely to have low-powered devices.
Another possibility would be to see how long some particular operations take, and if they take too long then recommend the light version. Again a very blunt solution.
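For the feature-detection route, a minimal sketch might look like the following; the feature list and the redirect URL are purely illustrative:

function shouldUseLightVersion() {
  // if any of these are missing, the browser is likely old and/or low-powered
  return !(typeof JSON !== 'undefined' &&
           typeof document.querySelector === 'function' &&
           typeof window.addEventListener === 'function' &&
           typeof window.requestAnimationFrame === 'function');
}

if (shouldUseLightVersion()) {
  window.location.href = '/light/'; // hypothetical URL of the light build
}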
The answer lies in the speed of the browser.
Browsers deliberately avoid reporting memory precisely, to limit fingerprinting accuracy.
So, http://thisbeautiful.w3spaces.com/notbad.htm contains code that times a loop on an interval, like this:
JavaScript:
setInterval(function () {
  var momentum = Date.now();
  for (var itr = 1; itr < 770; itr++) {} // busy loop used as a rough speed probe
  if (Date.now() - momentum > 3) { /* the browser has slowed down noticeably; take action */ }
}, 1000); // re-check every second
See the linked page for a full example.
Summary: the code measures how long a loop takes and, if the browser has slowed to a crawl, takes action.
Using a program outside the browser that monitors its memory use at the system level would be a better solution, as browsers themselves are not fully able to do such a thing.
Related
I have a website where I want to ban all users not using Chrome and its derivatives, Firefox and its derivatives, Safari, or a recent version of Opera (IE and other old browsers may compromise security). Is there an absolutely foolproof way (so that even a hacker couldn't spoof their browser) to do this in JavaScript on the client or server side?
No, it can't be done reliably. You could use some complex JavaScript timing runs and compare the time taken for the bank of tests against known timings for each browser and version. Obviously this would have to be done on a ratio basis to rule out differences in hardware performance. It would not be foolproof, though: you would have to "fingerprint" performance for every single browser version and make you are willing to accept, and that data would need constant updating as each new browser version is released. It is generally considered a very bad idea, and it can also produce false positives, denying access to people with compliant browsers because of variability in your JavaScript fingerprinting tests.
Hardening your server is a much better way to go and using simple browser identifiers like navigator name will provide the results you need. The trouble with trying to design a foolproof method is that, even if you get it to work, a cagey hacker can still get you. Hardening the Server is the only real way to secure your site.
This is a terrible security method
You should not be banking on the fact that a user is visiting your site from a particular browser. You should instead allow functionality for compliant browsers and disable it for the rest based on their signatures (the identifying string the browser sends, which indicates the browser type).
Most importantly, you should never leave a vulnerability in your application! If you know it's a vulnerability in different browser types, fix the problem - don't try to hide it. People will always find a way around a hidden problem. Easy fix? Make it so there is no problem to hide!
I am stuck with a big problem: I'm working on a big project whose JavaScript hangs the browser as soon as it executes.
"How do I detect how much memory JavaScript is using and clear that memory at a regular interval? Is it possible?"
You don't have any way to play with memory directly. JavaScript runs in a sandboxed environment, so you have no access to memory management of any kind. The garbage collector takes care of it; you can sometimes nudge it toward doing what you want, but its timing is unpredictable. Don't count on it.
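What you can do is make big structures eligible for collection by dropping every reference to them; a minimal sketch, with hypothetical names:

var cache = {};

function loadHugeDataset() {
  cache.rows = new Array(1000000); // stand-in for some large structure
}

function releaseHugeDataset() {
  // once nothing references the array, the GC is free to reclaim it whenever it runs
  cache.rows = null;
}

That only helps once you know which objects are the culprits, though.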
Rather, for your problem, you can use Chrome Inspector's Profiler.
What does it do? Well... it profiles the webpage you're on. You can see how long each function takes and, most importantly, where your bottleneck is.
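If you just want rough numbers without opening the Profiler UI, console.time/timeEnd can point you in the right direction; renderDashboard below is a hypothetical stand-in for whatever you suspect is slow:

function renderDashboard() { /* the code you suspect of being the bottleneck */ }

console.time('render');
renderDashboard();
console.timeEnd('render'); // logs something like "render: 834ms"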
Try it in Chrome, specifically.
Chrome's V8 has a brilliant generational garbage collector in which three kinds of polling happen: three threads constantly poll the three generation types, and I think they run at roughly 10, 50 and 200 millisecond intervals (I may have the figures wrong, but the principle is the same, with the interval increasing for the older generations).
This is aggressive, and ensures that memory usage remains low.
In spite of this, if your code is hogging memory in Chrome, then you can be sure that the issue is with the code. It could be that:
(a) Your code is really unoptimized, or
(b) It is really working on very large data that is probably not best suited for the client (e.g. an excessively heavy page that has tons of widgets, dom nodes etc.)
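For case (b), a crude first check is to count live DOM nodes; the 20,000 threshold is an arbitrary illustration:

var nodeCount = document.getElementsByTagName('*').length;
if (nodeCount > 20000) {
  console.warn('Very heavy DOM (' + nodeCount + ' nodes); the page itself may be the memory problem.');
}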
Care to post some snippets?
Canvas clearing gets vastly different performance on different browsers. See http://jsperf.com/canvas-clearing2 .
I need to clear a canvas every frame, and how I do it has a huge impact on mobile Safari vs. desktop Safari performance. Desktop Safari likes canvas.width = width, but mobile Safari prefers ctx.clearRect().
Is there a way to detect which browser is which and run different JavaScript based on it? I would prefer to do this through JavaScript rather than server side.
Also, I've found that jQuery's $.browser doesn't help because it doesn't distinguish between mobile Safari and desktop Safari. The navigator object has similar problems.
Targeting specific browsers is always a bit of a problem. While there are certainly ways to do it, it's not a particularly maintainable way to do it because there are lots of different browsers and versions of each browser and those browsers change over time, thus which browsers are optimized by which code can be changing all the time. This creates quite a maintenance headache. For example, mobile Safari on an iPod Touch has very different performance characteristics than mobile Safari on each different generation of iPhone or iPad.
So ... instead of trying to detect the type of browser, it's much better to either do feature detection or performance detection and dynamically adjust based on how any given browser reacts. Done right, this can work equally well for all browsers, even browsers you've never seen or that aren't even released yet.
In your case, you could devise a quick performance test that measures each of your two methods. If there's really a big performance difference between the two, then you could probably tell the difference in a matter of a few hundred milliseconds, set a cookie on the local browser indicating which method works best, and then just use the preferred method in that browser from then on. If you wanted to, you could let the cookie expire every few months (so it would get retested once in a while), or you could also store the exact browser version in the cookie and rerun the tests and set a new cookie whenever the browser version changes (software upgrades).
In this way, your code would always be using the fastest version of your code in all browsers, now and forever without you ever having to maintain/test zillions of browsers to know which should be used for each.
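Here is a minimal sketch of that measure-once-and-remember idea, applied to the two clearing techniques from the canvas question; the iteration counts are arbitrary, and localStorage stands in for the cookie purely for brevity:

function pickClearMethod(canvas) {
  var saved = localStorage.getItem('clearMethod');
  if (saved) { return saved; } // reuse the result of an earlier run

  var ctx = canvas.getContext('2d');
  var i, t0, widthReset, clearRect;

  t0 = Date.now();
  for (i = 0; i < 500; i++) { canvas.width = canvas.width; } // resetting width also clears the canvas
  widthReset = Date.now() - t0;

  t0 = Date.now();
  for (i = 0; i < 500; i++) { ctx.clearRect(0, 0, canvas.width, canvas.height); }
  clearRect = Date.now() - t0;

  var winner = (widthReset < clearRect) ? 'widthReset' : 'clearRect';
  localStorage.setItem('clearMethod', winner);
  return winner;
}

// usage: var method = pickClearMethod(document.getElementById('game')); // hypothetical element id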
I'm with jfriend00 on this if you're looking for longer-lasting, closer-to-foolproof solutions. However, you can still pull quite a bit of information with certain functions in JavaScript and use that to your advantage.
Check this out:
http://notnotmobile.appspot.com/
Other Resources
Navigator Object: http://www.w3schools.com/js/js_browser.asp
Browser Detect: http://www.quirksmode.org/js/detect.html
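And if you do end up sniffing yourself, navigator.userAgent is the usual source. Treat this as a heuristic rather than a guarantee, since other WebKit browsers also advertise "Safari" in their strings:

var ua = navigator.userAgent;
var isSafari = /Safari/.test(ua) && !/Chrome|Chromium/.test(ua); // Chrome's UA also contains "Safari"
var isMobileSafari = isSafari && /Mobile/.test(ua);

if (isMobileSafari) {
  // use the clearRect() path the question found faster on iOS
} else {
  // use the canvas.width reset path
}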
I am going to play devil's advocate for a moment. I have always wondered why browser detection (as opposed to feature detection) is considered flat-out bad practice. If I test a certain version of a certain browser and confirm that certain functionality behaves in some predictable way, then it seems OK to special-case it. The reasoning is that it will be foolproof in the future, because that particular browser version is not going to change. On the other hand, if I detect that a DOM element has a function X, it does not necessarily mean that:
This function works the same way in all browsers, and
More crucially, it will work the same way even in all future browsers.
I just peeked into the jQuery source, and they do feature detection by inserting a carefully constructed snippet of HTML into the DOM and then checking it for certain features. It's a sensible and solid way, but I would say it would be a bit too heavy if I just did something like this in my little piece of personal JavaScript (without jQuery). They also have the advantage of practically infinite QA resources. On the other hand, what you often see people doing is checking for the existence of function X and then, based on that, assuming the function will behave a certain way in all browsers that have it.
I'm not saying that feature detection is a bad thing (if used correctly), but I wonder why browser detection is usually dismissed out of hand even when it sounds logical. I wonder whether it is just another trendy thing to say.
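For reference, here is roughly the kind of DOM-insertion test I mean, loosely modelled on jQuery's approach; the three checks are only illustrative:

var div = document.createElement('div');
div.innerHTML = '<a href="/a" style="float: left; opacity: .5;">a</a>';
var a = div.getElementsByTagName('a')[0];

var support = {
  // older IE returned the fully resolved URL from getAttribute('href')
  hrefNormalized: a.getAttribute('href') === '/a',
  // older IE exposed "styleFloat" instead of the standard "cssFloat"
  cssFloat: !!a.style.cssFloat,
  // some engines dropped opacity when parsing inline style
  opacity: /^0.5/.test(a.style.opacity)
};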
It seems to me browser detection has been widely frowned upon since this post by Resig a couple of years ago. Resig's comments however were specific to libraries/framework code, i.e. code that will be consumed by other [domain-specific] applications/sites.
I think feature detection is without question a good fit for libraries/frameworks. For domain-specific applications however I'm not so sure browser detection is that bad. It's suitable for working around known browser characteristics that are difficult to feature-detect, or for browsers that have bugs in their implementation of the feature itself. Times that browser detection is appropriate:
sites/applications that are not cross-browser and need to show a warning/dialog/different page tailored to that client's browser. This is common in legacy applications.
Banks or private sites with strict policies on what browsers and versions are supported (to avoid known security exploits that may compromise user's data)
micro-optimizations: occasionally one browser is ridiculously faster than the others when performing some operation a certain way. It can be advantageous depending on your user base to branch on that particular browser/version.
Lack of PNG transparency in IE6
many display/rendering issues (read: IE CSS support) that only show up in specific browser versions, where you don't actually know which feature to test for.
That said, there are some major pitfalls (probably committed by most of us) to avoid when doing browser detection.
Here's a good article explaining how feature detection is superior in so many ways to browser sniffing.
The truth is that sniffing is extremely fragile. It's fragile in theory, as it relies on an arbitrary userAgent string and then practically maps that string to a certain behavior. It's also fragile in practice, as time has shown. Testing every major and minor version of dozens of browsers and trying to parse build numbers of some of those versions is not practical at all; testing certain behavior for quirks, on the other hand, is much more robust. Feature tests, for example, often catch bugs and inconsistencies that browser vendors incidentally copy from each other.
From my own experience, fixing Prototype.js in IE8, I know that 90% of the problems could have been avoided if we didn't sniff in the first place.
While fixing Prototype.js I discovered that some of the features that need to be tested are actually very common among JS libraries, so I made a little collection of common feature tests for anyone willing to get rid of sniffing.
The ideal solution would be to have a combination of both feature and browser detection. The former falls down because of the points you mentioned and the latter because sometimes browsers publish false information to "make things work" just so.
Mozilla has a great Browser Detection Primer that might be helpful to you as well.
From wikipedia
"At various points in its history, use of the Web has been dominated by one browser to the extent that many websites are designed to work only with that particular browser, rather than according to standards from bodies such as the W3C and IETF. Such sites often include "browser sniffing" code, which alters the information sent out depending on the User-Agent string received. This can mean that less popular browsers are not sent complex content, even though they might be able to deal with it correctly, or in extreme cases refused all content. Thus various browsers "cloak" or "spoof" this string, in order to identify themselves as something else to such detection code; often, the browser's real identity is then included later in the string."
I've recently seen some information about avoiding coding to specific browsers an instead using feature/bug detection. It seems that John Resig, the creator of jQuery is a big fan of feature/bug detection (he has an excellent talk that includes a discussion of this on YUI Theater). I'm curious if people are finding that this approach makes sense in practice? What if the bug no longer exists in the current version of the browser (It's an IE6 problem but not 7 or 8)?
Object detection's greatest strength is that you only use the objects and features made available by the client's browser. In other words, given the following code:
if (document.getFoo) {
  // always put your getFoo-dependent code in here
} else {
  // browsers that don't support getFoo go here
}
allows you to cleanly separate out the browsers without naming names. The reason this is cool is that as long as a browser supports getFoo, you don't have to worry which one it is. This means you may actually support browsers you have never heard of. If you parse user-agent strings to find the browser, you can only support the browsers you already know about.
Also if a browser that previously did not support getFoo gets with the program and releases a new version that does, you don't have to change your code at all to allow the new browser to take advantage of the better code.
What if the bug no longer exists in the current version of the browser (It's an IE6 problem but not 7 or 8)?
That's the whole point of feature/bug detection. If the browser updates its features or fixes a bug, then the feature/bug detecting code is already up to date. If you're doing browser sniffing on the other hand, you have to change your code every time the capabilities of a browser changes.
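The same pattern covers outright bugs: probe the behaviour once and branch on what you observe. A classic example is old IE's getElementById also matching elements by name (the probe name is hypothetical; run this after the DOM is ready):

function hasGetElementByIdNameBug() {
  var probe = document.createElement('a');
  probe.name = 'bug-probe'; // no element with id="bug-probe" exists on the page
  document.body.appendChild(probe);
  var buggy = (document.getElementById('bug-probe') === probe);
  document.body.removeChild(probe);
  return buggy; // true only in engines that still have the bug
}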
Version and User Agent parsing remind me of stereotyping or racial profiling. Object detection is the way to go in my opinion. The book jQuery in Action does a good job of pointing out the fine details of why.
Well, if it is a problem in IE 6, but not IE 7 or IE 8 then the feature/bug detection mechanism will mark it as not a problem and use the appropriate functions.
It makes sense in practice. Browser sniffing causes many issues: what if you don't update your signatures in time for some newly released browser? You are now locking out potential customers and visitors because you believe their browser can't support something that it actually can.
So yes, it makes sense, and your second question is moot precisely because the detection happens at runtime: if the bug is no longer present, everything works as expected, without the workaround that would have been applied had the bug been there!