Force ASP.NET to generate JavaScript for all User Agents - javascript

I noticed recently in my ASP.NET web application that if I set my User Agent to an empty string (using a Firefox plug-in to spoof the user agent), then ASP.NET will not generate the JavaScript required to perform postbacks. More specifically, if you try calling the __doPostBack(a, b) function from your JavaScript, you will get an error saying that the function is undefined.
I understand that every browser has a user agent, so this won't come up that often, but the essence of the problem still exists: there are cases in which an unrecognized or malformed user agent can render your web application unusable if you rely on postbacks.
This is similar to this question: ASP.net not generating javascript for some User Agents, but if I'm reading it right it looks like you'd fix each unrecognized user agent case by case and mask it as another browser. My concern is less with an individual user agent, and more so with the overall fact that certain user agents won't be able to use my application and I won't know it because the error happens in javascript and not on the server.
Does anyone know of a way that I can force ASP.NET to always generate the required javascript?

If the User Agent header is empty, or an unknown User Agent string is sent, then ASP.NET falls back to the Default.browser file in the Browsers directory, and there you can define how you would like to handle these cases.
The capability lines you need to change so that JavaScript is used in these cases are these:
<capability name="jscriptversion" value="5.6" />
<capability name="javascript" value="true" />
<capability name="javascriptversion" value="1.5" />
By default, the javascript line is false. I should note that I am not sure you really need to change it; if someone spoofs the user agent, let them do it. If they like your site and need to use it, it is better to let them figure out how to handle the browser they use. On the other hand, you should take care, if possible, that your site works even without JavaScript, at least for most of its actions.
It is better to avoid link buttons (which use __doPostBack) and to use regular submit buttons, for example. I know that GridView and other controls also use __doPostBack for paging; that's fine, but a good site should be able to work even without JavaScript.
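That said, this won't surface the failure on the server, which was your main concern. As a stopgap, a minimal client-side guard can at least report it; this is my own sketch, and LogMissingPostback.ashx is a hypothetical logging handler you would have to add yourself:
if (typeof __doPostBack === 'undefined') {
    // the postback script was never rendered for this user agent; fire a
    // beacon so the failure shows up in server logs instead of failing silently
    var img = new Image();
    img.src = '/LogMissingPostback.ashx?ua=' + encodeURIComponent(navigator.userAgent);
}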
A similar question:
__doPostBack is undefined on DotNetNuke website for IE 10
and this blog: Bug and Fix: ASP.NET fails to detect IE10 causing _doPostBack is undefined JavaScript error or maintain FF5 scrollbar position


XSS vulnerability despite encoding

I had a security audit on a website I have been working on. The audit showed that one of my parameters, called backurl, wasn't protected well enough in my JSP file. This URL is put inside the href of a button that allows the user to get back to the previous page.
So what I did was protect it using the OWASP library, with the function forHtmlAttribute. It gives something like this:
<a class="float_left button" href="${e:forHtmlAttribute(param.backUrl)}">Retour</a>
However, a second audit showed that by replacing the value of the parameter with:
javascript:eval(document%5b%27location%27%5d%5b%27hash%27%5d.substring(1))#alert(1234)
the JavaScript code would be executed and the alert would show when clicking on the button.
They said that something I could do was to hardcode the hostname value in front of the URL, but I don't really understand how this would help solve the problem. I feel like no matter what I do, solving an XSS vulnerability will just create a new one.
Could someone help me on this? To understand what's happening and where to look at least.
Thanks a lot.
As @Pointy said, the problem is more fundamental here. Accepting untrusted input and rendering it as a link verbatim (or as text) is a security issue, even if you escape the heck out of it. For example, if you allow login?msg=Password+incorrect and that's how you deal with relaying messages, you have a problem: I can make a site with a link reading "Click for cute kittens!" that actually points at your login page with a malicious msg parameter, and you see why this is a problem.
The real solution is to not accept potentially tainted information, period (never mind escaping!), if that information ends up being rendered without the surrounding context that it is tainted. For example, take Twitter. The username and the tweet are potentially tainted. This is no big deal because users of Twitter are clued in, by the very design of the website, that they're looking at what some rando wrote. If someone tweets 'If you transfer 5 bucks to Jack Dorsey's account at 12345678, he'll give you a twitter blue logo!', there's a reasonable expectation that it's on the users of the site not to be morons and trust it.
Your website's "click here to go to the previous page" is not like that. You can't reasonably expect the users of your site to hover over that button, check their browser's status bar, and figure it out.
Hence, the entire principle is wrong. You simply can't do it this way, period.
Your alternatives are threefold:
Instead of letting the 'previous link' property be a URL param, it should be in the session. Websites generally work with sessions; you can store whatever you want in them, server-side (the HTTP handler code either manually takes e.g. the cookie and uses it to look up on-server info for that user, or runs on a framework that provides an HttpSession-style object, which works in exactly that fashion).
If you really want to bend over backwards, you can include it as a signed blob. This is creative but I really wouldn't go there.
A quick hack: what if you just include "Click to go back" as a static link in your web page?
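A sketch of that static link, reusing the markup from your snippet (using history.back() is my assumption about how "go back" would be implemented):
<a class="float_left button" href="#" onclick="history.back(); return false;">Retour</a>
Because the href is fixed at build time, there is no tainted input left for an attacker to control.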

executing javascript through url

I'm trying to set the value of a textarea on the following page by executing something similar to the JavaScript below:
javascript:alert(document.getElementsByClassName('uiTextareaNoResize uiTextareaAutogrow _1rv DOMControl_placeholder')[0].value='blabla');
This works if I manually enter the code into the address bar on the target page, but I want to pass this argument through a link, i.e.:
<a href="/nextpage.php?javascript:alert(document.getElementsByClassName('uiTextareaNoResize uiTextareaAutogrow _1rv DOMControl_placeholder')[0].value='blabla');">
Just wondered if anything like this is possible at all?
You can send the arguments via the url like you would for GET requests. Then have the receiving page parse location.search.
For instance, you can send it like this:
http://example.com/?arg1=foo&arg2=bar
And the receiving page have a script like this:
var queryString = location.search; //?arg1=foo&arg2=bar
You'll have to parse it manually though: remove the ?, split by &, then split each pair by =.
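A minimal sketch of that manual parse, using the arg1/arg2 names from the example above:
var queryString = location.search;               // "?arg1=foo&arg2=bar"
var args = {};
queryString.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) return;                           // no query string at all
    var parts = pair.split('=');
    args[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
});
// args.arg1 === "foo", args.arg2 === "bar"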
This is called XSS, or Cross-Site Scripting, and as many comments have already pointed out, it is a security issue. This is why most major browsers do NOT allow it.
However, I believe that some browsers do allow it, for example Opera - although I can't recall exactly which version.
If you are looking to make your own "browser", I would recommend using the C# .NET WebBrowser control, together with the XULRunner runtime package (https://developer.mozilla.org/en-US/docs/Mozilla/Projects/XULRunner).
Despite all this, I would not recommend doing anything that may be against the law in your jurisdiction, or anything that would displease the site owner.

jQuery on MTurk, why does Chrome report "Unsafe JavaScript attempt to access frame with URL"?

I'm doing a couple of things with jQuery in an MTurk HIT, and I'm guessing one of these is the culprit. I have no need to access the surrounding document from the iframe, so if I am, I'd like to know where that's happening and how to stop it!
Otherwise, MTurk may be doing something incorrect (they use the 5-character token &amp; to separate URL arguments in the iframe URL, for example, so they DEFINITELY do incorrect things).
Here are the snippets that might be causing the problem. All of this is from within an iframe that's embedded in the MTurk HIT** (and related) page(s):
I'm embedding my JS in a $(window).load(). As I understand it, I need to use this instead of $(document).ready() because the latter won't wait for my iframe to load. Please correct me if I'm wrong.
I'm also running a RegExp.exec on window.location.href to extract the workerId.
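For concreteness, those two snippets together look roughly like this (a reconstruction rather than the asker's actual code; the exact regex is my assumption, and workerId is the parameter named above):
$(window).load(function () {
    // pull the workerId parameter out of the iframe's URL
    var match = /[?&]workerId=([^&#]*)/.exec(window.location.href);
    var workerId = match ? decodeURIComponent(match[1]) : null;
    // ... rest of the HIT logic goes here ...
});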
I apologize in advance if this is a duplicate. Indeed, after writing this, SO seems to have made a good guess at it: Debugging "unsafe javascript attempt to access frame with URL ...". I'll answer this question if I figure it out before you do.
It'd be great to get a good high-level reference on where to learn about this kind of thing. It doesn't fit naturally into any topic that I know - maybe learn about cross-site scripting so I can avoid it?
** If you don't know, an MTurk HIT is the unit of work for folks doing tasks on MTurk. You can see what they look like pretty quick if you navigate to http://mturk.com and view a HIT.
I've traced the code to the following chunk, run within jQuery from the inject.js file:
try {
    // reading window.frameElement can throw a security error in a
    // cross-origin frame, hence the try/catch around this check
    isHiddenIFrame = !isTopWindow && window.frameElement && window.frameElement.style.display === "none";
} catch(e) {}
I had a similar issue running jQuery in Mechanical Turk through Chrome.
The solution for me was to download the jQuery JS files I wanted, then upload them to the secure Amazon S3 service.
Then, in my HIT, I referenced the .js files at their new home on https://s3.amazonaws.com.
Tips on how to make code 'secure' by Chrome's standards are here:
http://developer.chrome.com/extensions/contentSecurityPolicy.html
This isn't a direct answer to your question, but our lab has been successful at circumventing (read: hacking around) this problem by asking workers to click a button inside the iframe that opens a separate pop-up window. Within the pop-up window, you're free to use jQuery and any other standard JS resources you want without triggering any of AMT's security alarms. This method has the added benefit of allowing workers to view your task in a full-sized browser window instead of AMT's tiny embedded iframes.
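A sketch of that button-to-pop-up pattern (the element id and the task URL are placeholders):
// inside the iframe: open the real task UI in a full-sized pop-up window
document.getElementById('open-task-button').addEventListener('click', function () {
    window.open('https://s3.amazonaws.com/your-bucket/task.html', 'hitTask',
        'width=1024,height=768,resizable=yes,scrollbars=yes');
});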

flowplayer html validation

OK, I'm trying to rebuild a client's website that's long overdue for a cleanup on the backend and under the hood. This client uses Flowplayer for most of the videos seen on any of their sites, and while attempting to validate my code via the W3C validator I noticed that the validator throws two errors, both pertaining to Flowplayer in this case.
http://validator.w3.org/check?uri=http%3A%2F%2Fv2.newyorkbarshow.com%2Fhome&charset=%28detect+automatically%29&doctype=Inline&group=0&user-agent=W3C_Validator%2F1.3
I am using the latest Flowplayer release and following their example (which I don't know validates either). So I am wondering, if it doesn't validate out of the box, whether anyone happens to know a means of correcting it so it will validate.
You need to add a data attribute with the value "/static/imgs/static/VidAd4BarShow.swf" to your object tag.
Note: I see this happens before Flowplayer executes anything within your source...
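Roughly like this (a sketch; only the data value comes from the answer above, while the type and dimensions are assumptions):
<object type="application/x-shockwave-flash"
        data="/static/imgs/static/VidAd4BarShow.swf"
        width="640" height="360">
    <param name="movie" value="/static/imgs/static/VidAd4BarShow.swf" />
</object>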

How to force update of design changes to clients using xPages?

I am building a webpage using XPages and I am making constant changes to scripts and design; this includes both server and client JavaScript, stylesheet, and image changes.
Each time I change a JavaScript file or stylesheet I want to see my changes in the web browser, and I also want my users to get the latest changes when they access the webpage.
I know I can use Shift-Reload or Ctrl-Reload and clear my browser cache, and I also know that I can change the objects' expiration dates, but I want a smoother and better-controlled way to do this.
Looking for any kind of best practice for doing this.
Thanks
Thomas
In the xsp.properties file for the application (or on the server, for server-wide use) you can set xsp.application.forcefullrefresh=true. The xsp.properties file documentation says:
# Application refresh
# When this property is set to true, then a full application refresh is requested when
# the design of a class changes (means that all the data are discarded in scopes).
# xsp.application.forcefullrefresh=false
The new XSP Portable Command Guide says "This property was introduced in Notes/Domino 8.5.3. It is set to false by default and is particularly useful during the development phase of an XPages application."
I have not fully tested this behavior, but it sounds promising. You could (and should) only set it to true WHILE you make the changes; once stable, set it back.
/Newbs
Adding to Ferry's answer and your comment:
Instead of "?dummy=randomvalue", you can use "?version=2.1". So it will be cached but when you change design, you can just increase the version.
There's a problem with this approach as some proxy servers won't cache anything with query params. Better to rename the file directly, adding date or version number to it. It will always work.
To disable caching temporarily, use Fiddler2. It's easy to enable and disable in one place across any web client, and it has the added benefit of HTTP request tracking features.
To fully disable any caching, add url + "?dummy=" + @Unique() to every URL for JavaScript or image files...
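The same idea in plain client-side JavaScript (a sketch; the .nsf path is a placeholder):
// append a unique value so the browser can never reuse a cached copy
var script = document.createElement('script');
script.src = '/myapp.nsf/main.js?dummy=' + new Date().getTime();
document.getElementsByTagName('head')[0].appendChild(script);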
The way I am reading this question is that you want every change you make to appear immediately on the client's browser or client. Are you really sure you want to do this? It sounds like you are not doing any testing so any typos, bugs, crashes, etc will be passed on to your users. Sounds like a bad plan to me. I hope I am wrong and that you are using a template and pushing only your fully tested changes up to an production version instead of making the changes in the production version.
I would just put out a schedule of when changes are going to be pushed up to production and let the users reload their browser or client at that time. Either that or do it during off hours and when they next log on, they get the newest changes.
Maybe you could look at how Domino can control caching of URLs:
http://www.ibm.com/developerworks/lotus/library/ls-resp_head_rules/
Newbs' answer is a good one, but it is useful to note that in Firefox there is a very useful plug-in called "Web Developer" from Chris Pederick that allows you to disable the cache.
http://chrispederick.com/work/web-developer/
The other really useful one is Firebug, which is just amazing - it makes any HTML work much easier:
https://addons.mozilla.org/en-US/firefox/addon/firebug/
I found another solution: by putting my CSS and JS in a theme, it is easy to just rename the files, as described here:
http://goo.gl/vFTii
Why don't we use window.location.reload()? It does much the same as Ctrl+F5: it reloads the page, which is similar to context.reloadPage().
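For reference (note that a plain reload() does not necessarily bypass the cache the way Ctrl+F5 does):
// reload the current page; the boolean argument is non-standard, and
// historically only Firefox honored it as "bypass the cache"
window.location.reload(true);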
