I've got a problem with CKEditor. In particular, the br and img tags are transformed so that they are no longer valid XHTML.
In the source view I see <br /> and <img />, but when I inspect the RTE source the slashes are gone, and the same happens when submitting the form.
Can someone tell me where I can disable this or enable XHTML conformity? The embedding page is XHTML.
Thanks for any hints, ideas or solutions.
To solve the problem described above:
This wasn't a problem with CKEditor. I was using jQuery's html() method, which relies on the DOM innerHTML property, to get CKEditor's contents. innerHTML does not serialize to XHTML at all (tested in FF and Chrome).
You need to re-add the self-closing slashes manually.
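A minimal sketch of that workaround, assuming the content only contains the usual void elements (the function name and selector are mine, not part of CKEditor or jQuery):

// Re-add the self-closing slashes that innerHTML drops from void elements.
function toXhtml(html) {
    return html.replace(/<(br|img|hr|input|meta|link)\b([^>]*?)\s*\/?>/gi, '<$1$2 />');
}

var xhtml = toXhtml($('#editor-container').html()); // '#editor-container' is illustrative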
When we ran a security audit of our project, it reported a broken-link vulnerability for "/a".
After searching the whole project for that link, we found it in the jQuery-1.9.js JavaScript file we are using.
The relevant part of jQuery-1.9.js:
// Make sure that URLs aren't manipulated
// (IE normalizes it by default)
hrefNormalized: a.getAttribute("href") === "/a",
As I understand it, this code helps make jQuery compatible with IE 6/7/8.
hrefNormalized checks whether an anchor's getAttribute("href") returns the exact href value or a fully resolved URL, which is an issue in those IE versions.
A better explanation of this part is given at
https://www.inkling.com/read/jquery-cookbook-cody-lindley-1st/chapter-4/recipe-4-1
I want to get rid of this finding, but I don't want to remove or change code in the jQuery file.
So my question is: why didn't the jQuery developers use "/#" instead of "/a"? What would be the problem with using "/#" in that code?
The same question was asked of the jQuery team earlier, but they said it is not a problem in jQuery.
For reference, the ticket:
http://bugs.jquery.com/ticket/10149
Can you help me solve this, or is there another solution?
Thank you
This is not a vulnerability but a false positive. The security scanner interprets the "/a" string as a link, which it is not.
Even if jQuery creates the link in the DOM, it's not clickable or visible to the user. Your website does not actually have a real link to /a anywhere.
I would ignore the problem without changing anything.
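For illustration, the detection boils down to something like this (a simplified sketch, not jQuery's exact code):

// Build a detached element and see how the browser reports its href attribute.
var div = document.createElement("div");
div.innerHTML = "<a href='/a'>a</a>";
var a = div.getElementsByTagName("a")[0];

// Standards-compliant browsers return the literal attribute value "/a";
// old IE returns a fully resolved URL instead.
var hrefNormalized = a.getAttribute("href") === "/a";

// The div is never appended to the document, so the page never contains
// a real, followable link to /a.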
Maybe you want hrefNormalized: a.getAttribute("href") === "/a", to become hrefNormalized: a.getAttribute("href") === "/#", but you don't want to touch the jQuery file.
Put your version in a separate script that the browser loads after the jQuery file, so your definition overrides the original.
Anyway, I have never had issues with jQuery before; check your own code first.
If you don't want scripts in your views, put the override in a .js file and reference that file in your view after the jQuery file.
Hope this helps, or at least gives you some ideas to solve your problem. Good luck, let us know how it goes! ;)
EDIT:
<script src="~/JQuery/jquery-2.0.3.js"></script>
<script src="~/Scripts/Fix.js"></script>
If you do something like this, the browser reads the jQuery file first and then Fix.js. Inside Fix.js you put the function or parameter you want to change from jQuery.
When two definitions have the same name, the browser keeps the last one it reads.
For example:
function whatever() { // This is in the jQuery file
    // things #1
}

function whatever() { // This is in the Fix file
    // different things #2
}
This way the browser uses the version from Fix.js, because it was the last one it read.
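Applied to this case, about the most a Fix.js can do is override the flag that jQuery exposes after its detection has run; a sketch, assuming jQuery 1.9's jQuery.support object (note this does not remove the "/a" string from jquery-1.9.js itself, which is what the scanner is flagging):

// Fix.js - a sketch; it only changes the exposed flag, not jQuery's source.
if (window.jQuery && jQuery.support) {
    // Modern browsers already report true here; forcing it would be wrong on IE 6/7.
    jQuery.support.hrefNormalized = true;
}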
I have a module/widget that deliberately sets stopParser: true to prevent the dojo parser from parsing its content. This works fine as long as the widget is placed directly in an HTML page. As soon as I use the widget inside the template of another widget, stopParser no longer works. I reckon it's a bug, since the same problem exists with dijit/layout/ContentPane and the parserOnLoad property. Does anyone have an explanation or a tip on how to solve this?
BR,
Daniel
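For context, the setup in question looks roughly like this (a sketch with illustrative widget names, assuming declarative widget creation):

<!-- Directly in the page: stopParser is honoured and the inner widget stays unparsed. -->
<div data-dojo-type="my/NoParseWidget" data-dojo-props="stopParser: true">
    <div data-dojo-type="dijit/form/Button">left as plain markup</div>
</div>

<!-- The same markup placed inside another widget's template gets parsed anyway
     when that outer widget instantiates its template. -->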
My co-worker and I have spent about an hour on this now and we can't figure it out. This works fine for us in Chrome and Firefox. This is basically a dumbed down version of it:
http://jsbin.com/osebuc
It works fine in this test case, but in IE8, in our real application, it appends the HTML. We literally just have $('.panda').html(someHtml) in the code, but in IE, instead of replacing the HTML it appends the HTML each time.
We also tried $('.panda').empty().html(someHtml) in IE, but then IE seems to "lose track" of .panda: console.log($('.panda').length) returns 1, then after another button click (back to the original HTML) it returns 0.
Has anyone else seen this? Anyone have any ideas why this would happen?
Why:
My co-worker and I are trying to make some forms look prettier (a "beta" form) without touching the original HTML (we can't), while keeping a way to go back to the original form ("classic") if the user clicks a button. To do this we cache the original HTML in a variable, build the new beta HTML and save it to another variable, and then toggle between them as in the example above.
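Roughly what the toggle looks like (variable and function names are illustrative, not our real code):

// Sketch of the toggle; buildBetaHtml() is a hypothetical helper, and the
// toggle button is assumed to sit outside the markup being swapped.
var $form = $('.panda');
var classicHtml = $form.html();   // cache the original markup
var betaHtml = buildBetaHtml();   // build the prettier markup
var showingBeta = false;

$('#toggle-button').click(function () {
    showingBeta = !showingBeta;
    $form.html(showingBeta ? betaHtml : classicHtml);
});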
I just had the very same issue: .html() in IE7/IE8 was appending.
I managed to sort it out by doing the following:
$('.panda').html("").html(someHtml);
Basically, set it to blank manually first, then add the markup from the Ajax call.
Although I personally prefer jQuery, have you tried plain old JavaScript?
$('.panda')[0].innerHTML = "";
$('.panda')[0].innerHTML = "some html";
I had the same problem. Make sure your HTML is valid; in my case an opening tag and its closing tag didn't match. ^^
I am developing a JavaScript app that will wrap every line of text entered inside an iframe (designMode) in a P (or div), as happens by default in IE.
For now I am not pasting my code because I have only just started. The first problem: when I type some text in Firefox, even before I press Enter or call any function, Firebug shows
<br _moz_dirty="">
under the entered text.
Why? How can I prevent it?
If you still need my code, please tell me.
As the _moz_-prefix suggests, this is a Mozilla-internal property. It isn't inserted by Firebug, but rather by the core editor functionality in Gecko. You can't prevent it; ignore it or work around it.
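If it gets in the way when you serialize the content, one workaround is to strip the Mozilla-internal attributes before saving (a sketch; the function name is mine, and editorBody would be the body of the designMode iframe):

// Remove _moz_* attributes (such as _moz_dirty="") from the markup before saving.
function cleanEditorHtml(editorBody) {
    return editorBody.innerHTML.replace(/\s+_moz_[\w-]*="[^"]*"/gi, '');
}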
#myEditableDiv br {display:none;}
It's something Mozilla uses to prevent empty containers from collapsing, and it occasionally gets inserted at seemingly random times too.
The question is, if they knew it was a dirty hack, then why did they do it?
The _moz_dirty attribute is used to indicate that the node needs to be pretty-printed when the document is saved. It shouldn't appear in web pages, only in SeaMonkey Composer and in the HTML message compose windows of SeaMonkey and Thunderbird.
The Gecko editor used to put it there because it needed somewhere to put the cursor. I believe this is fixed in Firefox 4.
Custom HTML attributes don't seem to work in Chrome.
What I mean is, suppose I have this HTML:
<div id="my_div" my_attr="1"></div>
If I try to get this attribute with JavaScript in Chrome, I get undefined
alert( document.getElementById( "my_div" ).my_attr );
In IE it works just fine.
Retrieving it via getAttribute():
alert(document.getElementById( "my_div" ).getAttribute("my_attr"));
Works fine for me across IE, FF and Chrome.
IE is about the only browser I've seen that honors attributes that do not conform to the HTML DTD by exposing them as element properties.
http://www.webdeveloper.com/forum/showthread.php?t=79429
However, if you're willing to write a custom DTD, you can get this to work.
This is a good article for getting started down that direction:
http://www.alistapart.com/articles/scripttriggers/
I had the same problem in Safari, and using getAttribute() did the trick. It seems to be cross-browser compatible. Here is a nice article: http://www.javascriptkit.com/dhtmltutors/customattributes.shtml
Are you declaring your page as XHTML-compliant? If so, you can't add new attributes to elements willy-nilly. My understanding is that there are ways (after all, ASP.NET manages it), but you have to emit all kinds of gunk (a custom schema?). I'm not familiar with the details.