I'm making a JSP web-based application. Some of the buttons and labels are not visible in Internet Explorer, but everything looks fine in Google Chrome. How do I handle browser compatibility so that users see the application correctly in all browsers?
Browser compatibility does not depend on the Java/JSP code, but on the HTML/CSS/JS code it generates. You, as the web developer, usually have full control over this. If you use a strict HTML doctype, write HTML according to the W3C standards (i.e. the page at least passes the W3C validator), and use jQuery for your JavaScript functions, then you basically have nothing to worry about.
That leaves CSS, which can be a pain when you use a wrong HTML doctype that pushes MSIE into the so-called "quirks mode" (which reveals the MSIE box model bug). So you want to get at least the doctype straight first (to start, use the HTML5 <!DOCTYPE html>). This should solve most of the MSIE CSS issues, and you can then fix the remaining CSS issues individually. More often than not they concern IE6/7-specific CSS bugs which boil down to the hasLayout bug. It's impossible to cover all of those bugs in detail in a single answer. You'd do better to ask a separate, specific question here whenever you get stuck fixing an individual CSS issue.
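As an illustration, here is a minimal page skeleton along those lines (just a sketch; the file names and the title are placeholders, not taken from the question):

<!DOCTYPE html>
<html lang="en">
  <head>
    <!-- The doctype above, as the very first line, keeps MSIE out of quirks mode -->
    <meta charset="utf-8">
    <title>Example page</title>
    <link rel="stylesheet" href="style.css">
    <script src="jquery.js"></script>
  </head>
  <body>
    <h1>Hello world</h1>
  </body>
</html>

A page built like this should pass the W3C validator and render with the same box model in IE, Chrome, and Firefox.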
Related
I have created lots of buttons for a large number of pages (usually 5-10 in a row at the bottom of each page, inside a table cell) using input type="button" name="..." value="..." onclick="some JavaScript event handler", etc., basically to link to other pages of the same group. All these pages are ultimately linked from an iframe tag on a single page. The buttons work fine offline on my PC, at least. But now I've suddenly realized that I haven't used any 'form' tag for these buttons.
So my question is: is this 'form' tag really necessary? Will there be any problem after I upload? I would prefer not to add the form tag to so many pages now if it's not really necessary, because that would be a real drag. But I don't want to suffer afterwards either.
No, it is not necessary as long as you are not doing any GET/POST submission or grouping form elements together. The buttons will work completely fine without a form tag.
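For example, a row of navigation-only buttons along the lines the question describes works fine on its own (a sketch with made-up page names, not the poster's actual code):

<!-- No <form> element: these buttons only run JavaScript and never submit anything -->
<td>
  <input type="button" name="btnIntro" value="Intro" onclick="location.href='intro.html'">
  <input type="button" name="btnChapter2" value="Chapter 2" onclick="location.href='chapter2.html'">
</td>

A form (and a submit button) only becomes necessary once you actually want the browser to send field values to the server via GET or POST.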
There are two issues to be concerned with:
Is it valid HTML?
Turns out that it is valid HTML (see Is <input> well formed without a <form>?), so you are on the safe side here.
Will all common browsers accept it?
After googling around I haven't seen any information on problems with this use of input tags. That suggests that all common browsers accept this valid HTML (as they should). When developing any website that is accessible to the general public, I would always do a manual cross-browser check to discover any abnormalities certain browsers may have with my site.
The problem is that you most likely won't be able to tell from looking at your server logs whether certain browsers have a problem with your HTML. It may just not work on IE 6, and you would never find out unless a disgruntled customer calls up and informs you.
If this is a generally accessible website, get some stats on the most commonly used browsers, decide which ones you want to support, and verify that the website behaves properly in those browsers. This is a pain in the ass, but there is no way I know of to get around it. Browsers may not react to valid HTML properly.
As a rule of thumb, Firefox, Chrome, and Safari usually behave well, and because of auto-updates most people will have a very recent version installed. If the latest version of the browser works, I wouldn't be too concerned that users with some older version will have trouble.
The real test is always Internet Explorer. While versions 8 and 9 are pretty standards-compliant, IE 7 certainly needs checking. IE 6 is the worst offender for unusual behavior. It was introduced in 2002, yet 6% of users still use it today! Most of this comes from cracked copies of Windows XP in China, but it is also used quite a lot in corporate networks, where the OS and browsers are centrally deployed and administrators have not managed to move on since the early 2000s.
In conclusion: your code is unusual but OK; test it manually on the browsers you expect.
When I print content using JavaScript, the browser automatically adds a header and footer (URL / date / page number). Currently there seems to be no way to suppress this from the web-app side.
CSS3 might eventually be a solution for it (e.g. with the @page rule and margin boxes like @top-left), but it currently doesn't seem to work here (Windows Vista, Chrome 17.0.942.0 / Firefox 9.0). When is it supposed to arrive in the browsers?
Another solution might arise with the Chrome browser: in the version above, the print dialog is not the modal system print dialog but is rendered within the website (there is also a checkbox to disable the header and footer). Now that Chrome has reworked the print dialog, might Chrome also provide an API to control printing from JavaScript?
Are there other solutions on the horizon? It can't be the final state of affairs that PDF or other plugins are needed to print from the browser with full control.
Do this:
@page {
  margin: 0;
}
Done!
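Spelled out a little more (still only a sketch; the 1cm padding is an arbitrary value of my choosing, not something the answer prescribes), the idea is to zero the page margin, which is where the browser prints its header and footer, and give the content its own spacing back:

/* Remove the page margin, where the URL/date/page number are printed */
@page {
  margin: 0;
}

/* Give the printed content some breathing room of its own */
@media print {
  body {
    padding: 1cm;
  }
}

Whether the header and footer actually disappear still depends on the browser honoring the zero page margin.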
Currently, JavaScript is very limited in accessing resources "outside the browser", such as hardware and the file system, for security reasons. Judging from this trend, I doubt that programmatically controlling how printouts come out will be in JavaScript's future. I say this because having those headers and footers (however ugly they are) should ultimately remain the user's decision.
Even with CSS3, you are still talking about reaching content outside of the HTML document itself. Those headers and footers are set by internal browser functions. However, Chrome does provide an easy UI to get rid of them when printing.
That said, especially with Chrome, there is a lot of power in their extensions, especially if you use NPAPI plugins (though this just poses another security risk). This route is very technical, but could probably be "another solution."
I'm a little curious about how the editing in Google Docs works. How did they implement an editor within the DOM? It does not really look like a form with a textarea, but rather a normal document body with an additional cursor. I guess there is some JavaScript technique behind it.
Is there any free library that I can use to achieve this kind of functionality, or how can I implement it myself?
2019 Update
I'm pretty certain the answer below was accurate at the time of writing in 2010, but it has been substantially inaccurate for several years. Here's an answer of mine to a similar question from 2012 that may be more accurate, although still possibly not massively helpful.
How does Google Docs achieve content editing?
Original answer
It uses editing functionality built into all modern desktop browsers, accessed at document level via setting the designMode property of the document to "on", or at an element level by adding a contenteditable attribute with value "true" (also triggered by setting the contentEditable property of an element to the string "true").
A very common strategy for editing a piece of content in a web page is to include an iframe whose document has designMode turned on.
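A minimal sketch of both approaches (the ids here are invented for illustration):

<!-- Element-level editing via contenteditable -->
<div id="editor" contenteditable="true">Type here...</div>

<!-- Document-level editing inside an iframe -->
<iframe id="editFrame" src="about:blank"></iframe>
<script>
  // Turning designMode on makes the whole iframe document editable
  var frame = document.getElementById("editFrame");
  frame.contentDocument.designMode = "on";
</script>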
Coupled with this is document.execCommand, which in an editable document can be used to apply various kinds of formatting. For example:
document.execCommand("bold", false, null);
... will toggle whether the selected text is bold. There is a pool of common commands shared between browsers, but there are some discrepancies in exactly how some of them are implemented. More info can be found at MSDN for IE and at MDC for Firefox; I can't seem to find equivalent documentation for WebKit.
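For instance, a crude bold button wired to an editable region could look like this (a sketch only; as noted above, the exact behaviour of each command differs slightly between browsers):

<div id="textArea" contenteditable="true">Select some of this text, then press the button.</div>
<!-- mousedown + preventDefault keeps the selection inside the editable div from collapsing -->
<button onmousedown="event.preventDefault(); toggleBold();">Bold</button>
<script>
  function toggleBold() {
    // Toggles bold formatting on the current selection
    document.execCommand("bold", false, null);
  }
</script>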
Since mid-2010, Google Docs seems to have switched away completely from relying on the browser's editing mode.
Instead they built their own text/HTML editor using JavaScript and the DOM.
They explain it in a lengthy blog post on how they implemented the functionality.
Having searched for 3rd-party vendors offering similar concepts, I have found none so far. It would have been great for iOS, since iOS did not support the contenteditable attribute until iOS 5 (and even then, there are issues).
To me it looks like any other HTML editor. They just coded their own JavaScript HTML editor. Even the HTML edit view doesn't involve any magic.
A good and free HTML editor is TinyMCE, but there are many others out there, including some very powerful proprietary ones like CuteEditor, which is available for PHP and ASP.NET.
BTW: The content of the document (in Google Docs) is placed in an iframe, just as it is in CuteEditor (and probably also in TinyMCE).
As other people have said, Google is very uptight about what information they share. However, they have posted a lengthy blog post about how they built their own word processing system from scratch. Building this yourself would require your own experienced team and a serious amount of work to complete.
Link to lengthy blog is here:
https://drive.googleblog.com/2010/05/whats-different-about-new-google-docs.html
Essentially, they capture where your cursor is, place a div that looks like a cursor line, and manually insert a character at the place where your cursor is.
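Purely to illustrate that idea (a hypothetical sketch, not Google's actual code), one can draw a blinking span as a fake caret and insert a span per typed character in front of it:

<div id="doc" tabindex="0" style="font-family: monospace; min-height: 1.5em;">
  <span id="caret" style="display: inline-block; width: 1px; height: 1em; background: #000;"></span>
</div>
<script>
  var doc = document.getElementById("doc");
  var caret = document.getElementById("caret");

  // Blink the fake caret like a real text cursor
  setInterval(function () {
    caret.style.visibility = (caret.style.visibility === "hidden") ? "visible" : "hidden";
  }, 500);

  // Click the div to focus it; every typed character is then inserted
  // manually into the DOM just before the fake caret
  doc.addEventListener("keypress", function (e) {
    var ch = document.createElement("span");
    ch.textContent = String.fromCharCode(e.charCode || e.keyCode);
    doc.insertBefore(ch, caret);
    e.preventDefault();
  });
</script>

A real implementation obviously also needs key navigation, selection handling, word wrap and so on, which is where the bulk of Google's work went.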
I'm learning jQuery and am about to write some pages that use that library intensively. I just learned that some users disable JavaScript in their browser (I didn't even know that was possible and/or necessary).
Now, here's my question: what happens to my web application if a user disables JavaScript? For instance, I'd like to display some screens using AJAX and commands such as 'insertBefore' to bring in a DIV that displays the result.
So, if JavaScript is disabled, I wonder what's going to happen to all this work that relies on JavaScript?
I'm kind of lost.
Thanks for helping
You may want to start by reading about Progressive Enhancement and Unobtrusive JavaScript.
I would also suggest investigating how popular rich web applications like GMail, Google Maps and others handle these situations.
I just learned that some users disable JavaScript in their browser
I do. The "NoScript" plugin for Firefox does the trick.
So, if JavaScript is disabled, I wonder what's going to happen to all this work that relies on JavaScript?
It won't be functional.
Good practice suggests designing a site so it does not rely on JavaScript for major functionality. At the very least, accessing its content (in read mode) should be possible. JavaScript should only add interface enhancements such as Ajax techniques; the fallback version should always work.
I feel really sad when I see a site that is completely broken without JavaScript. Why can't people use CSS to put elements in their proper places? Why do they try to align elements with JavaScript even when there is no dynamic behaviour involved?
The same goes for Flash sites. Once in a while I land upon a "web-design-agency" site which makes picky comments about me not allowing JavaScript. When I do allow it, I only see a basic, primitive site with a few menus and that's it. What was the point of using Flash when the work is so primitive it could be done with raw HTML and CSS in an hour? For me it's a sign of unprofessional work.
Everything that's done in JavaScript won't work. Some users disable it for security reasons; NoScript is an excellent example. You can try it yourself by removing the scripts from your page or by installing the NoScript plugin for Firefox.
As a rule of thumb:
Make the website work with only semantic HTML
add the CSS
add the JS
But the website should be (almost) fully functional in stage 1 already, as illustrated below.
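A minimal sketch of that rule of thumb, with invented URLs and ids (stage 1 is the plain link that still works with JavaScript off; stage 3 uses jQuery and insertBefore, much like the question describes; details.html is assumed to return an HTML fragment):

<!-- Stage 1: plain semantic HTML; without JavaScript the link simply navigates to details.html -->
<a id="detailsLink" href="details.html">Show details</a>
<div id="target"></div>

<script src="jquery.js"></script>
<script>
  // Stage 3: if JavaScript is available, enhance the link to load the content in-page instead
  $(function () {
    $("#detailsLink").click(function (event) {
      event.preventDefault();
      $.get(this.href, function (html) {
        $(html).insertBefore("#target");
      });
    });
  });
</script>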
If you disable JavaScript in Safari, things like Lexulous on Facebook won't work properly; the drag-a-letter-with-the-mouse function doesn't work.
Does having (approx. 100) HTML validation errors affect my page loading speed? Currently the errors on my pages don't break the page in ANY browser, but I'd spend the time and clear them anyway if it would improve my page loading speed.
If not on desktops, how about mobile devices like the iPhone or Android? (For example, the N1 and the Droid load pages much more slowly than the iPhone, although they all use the WebKit engine.)
Edit: My focus here is speed optimization, not cross-browser compatibility (which is already achieved). Google and other biggies seem to use invalid HTML; is that for speed, for compatibility, or for both?
Edit #2: I'm not in quirks mode, i.e. I use the XHTML Strict doctype, my source looks great, and it is mostly valid, but 100% valid HTML usually requires a design (or some other kind of) sacrifice.
Thanks
It doesn't affect loading speed. Bad data is transferred over the wire just as fast as good data.
It does affect rendering speed, though (in some cases positively: MSIE tends to be abysmally slow in standards mode). In most cases, however, rendering will be somewhat slower due to quirks mode, which is less efficient and more paranoid; instead of just executing your data like a well-written program, the browser tries its best to fish some meaningful content out of what is essentially tag soup.
Some validation errors, like a missing alt attribute or no / at the end of self-closing tags, won't affect rendering at all, but others, like a missing closing tag or antiquated, obsolete attributes, may impact performance seriously.
It might affect loading speed, or it might not. It depends on the kind of errors you're getting.
I'd say that in most cases it's likely to be slower, because the browser has to handle these errors. For instance, if you forget to close a div tag, some browsers will close it for you. This takes processing time and increases the loading time.
That said, I don't think the time delta between no errors and 100 errors would be significant. But if you have that many errors, you should consider fixing your code anyway :)
Probably yes, and here's why.
If your code is valid for the W3C doctype you are using, the browser doesn't have to put in extra effort to fix your code or fall back to quirks mode; it stands to reason that if your code validates, the browser won't have to piece the page back together.
Remember that it's always beneficial to make your code validate, if only to ensure a consistent design across the popular browsers. Finally, you'll probably find that once you fix the first few errors, your list of 100 errors decreases drastically.
In theory, yes, fixing them will decrease page load times, because the browser has less error handling and recovery to do.
However, it does depend on the nature of the validation errors. If you're improperly nesting tags (which may actually be valid in HTML4, where some closing tags are optional), the browser has to do a little more work to figure out where elements start and end. And this is the kind of thing that can cause cross-browser problems.
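For example (a contrived snippet purely for illustration), overlapping tags like these force the parser's error recovery to guess where each element really ends, and different browsers may guess differently:

<!-- Invalid: <b> and <i> overlap instead of nesting -->
<p>Some <b>bold and <i>italic</b> text</i> here.</p>

<!-- Valid: properly nested, so no guessing is required -->
<p>Some <b>bold and <i>italic</i></b> <i>text</i> here.</p>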
If you're simply using unofficial attributes (say, the target attribute on links) then support for that is either built into the browser or not. If the browser understands it, it will do something with it, otherwise it will ignore the attribute.
One thing that will ramp up your validation errors is using <br> under XHTML or <br /> under HTML. Neither should increase loading times (although <br /> takes a fraction longer to download).