Can I prevent a user from pasting JavaScript into a design-mode iframe?

I'm building a webapp that contains an iframe in design mode so my users can "tart up" their content and paste in material to be displayed on their page, like the WYSIWYG editor on most blog engines or forums.
I'm trying to think of all the potential security holes I need to plug, one of which is a user pasting in JavaScript:
<script type="text/javascript">
// Do some nasty stuff
</script>
Now I know I can strip this out at the server end, before saving it and/or serving it back, but I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Any advice would be great; I couldn't find much searching Google.
Anthony

...I'm worried about the possibility of someone being able to paste some script in and run it there and then, without even sending it back to the server for processing.
Am I worrying over nothing?
Firefox has a plug-in called Greasemonkey that allows users to arbitrarily run JavaScript against any page that loads into their browser, and there is nothing you can do about it. Firebug allows you to modify web pages as well as run arbitrary JavaScript.
AFAIK, you really only need to worry once it gets to your server, and then potentially hits other users.

As Jason said, I would focus more on cleaning the data on the server side. You don't really have any control on the client side unless you're using Silverlight/Flex, and even then you'd need to validate on the server.
That said, here are some tips from A List Apart you may find helpful regarding server-side data cleaning.
http://www.alistapart.com/articles/secureyourcode
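To make that server-side cleaning concrete, here is a minimal sketch using the sanitize-html package for Node.js; the package choice and the allow-list are illustrative assumptions, not something prescribed by the answers above:

// Minimal sketch: sanitize user HTML on the server before storing or serving it.
// Assumes Node.js with the sanitize-html package (npm install sanitize-html).
const sanitizeHtml = require('sanitize-html');

function cleanUserHtml(dirty) {
  return sanitizeHtml(dirty, {
    // Illustrative allow-list: keep basic formatting, drop everything else
    allowedTags: ['b', 'i', 'em', 'strong', 'p', 'ul', 'ol', 'li', 'a'],
    allowedAttributes: { a: ['href'] },
    // Restrict URL schemes so javascript: links are stripped
    allowedSchemes: ['http', 'https', 'mailto']
  });
}

// <script> blocks and inline event handlers are removed entirely:
console.log(cleanUserHtml('<p onclick="evil()">Hi<script>bad()</script></p>'));
// -> <p>Hi</p>

The key point from the answers above is that this filtering must live on the server, where the user cannot bypass it.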

Related

Why do I get these hyperlinks on my website? Security flaw?

I recently found these malicious hyperlinks on my site (shown in a screenshot in the original post).
I tried logging into my web host, and I couldn't find any hyperlinks attached to the elements in my files.
My Questions:
How do I avoid these?
How can I remove them?
Despite these hyperlinks, is my website vulnerable to any XSS attacks? If yes, please specify the holes I should fill.
I am using Ajax to send an instant response indicating whether the email already exists; does this make it easier for an attacker to send XMLHttpRequests to the server?
I just want to make my website as close to 100% safe as possible, meaning no one could ever get into the database (confidentiality, integrity, and availability), given that I have an SSL certificate and serve the site over HTTPS. It's only a login-system website without much complicated input.
I have heard that using SQL stored procedures helps, as does HTML encoding.
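(As a side note on HTML encoding: a minimal escaping helper, written for this writeup rather than taken from the site in question, looks like this.)

// Minimal HTML-encoding helper (illustrative sketch).
// Escapes the five characters that are significant in HTML, so user
// input echoed into a page cannot break out into live markup.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert(1)</script>'));
// -> &lt;script&gt;alert(1)&lt;/script&gt;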
Please visit the website and take a look at the code:
www.tarsh.tk
Any Help/Hints/Tips/Links would be appreciated.
The site at www.tarsh.tk does not show any hyperlinks for me; see http://picpaste.com/Screen_Shot_2016-03-20_at_11.29.02_PM-F7OsKLUZ.png.
Maybe it isn't the site but your browser. Have you tried a different browser?
I used Chrome 49 and Safari 9; both render the site without the hyperlinks.

Emulate a human using an open-source browser

I want to emulate a human (not for spam) to automate some of my daily work. I don't want a curl/wget solution because that would mean analyzing the HTTP requests the browser sends. So, basically:
I will use a browser (e.g. Chrome) to log in to the system, so I have a login session in that browser.
I open the system's search page and, by some means, start the script.
The script inputs a string into the search field and submits the form.
The browser redirects to the result page, and my script analyzes that page and extracts the things I want.
Are there any existing solutions I can use to write such a script? The functionality should include:
filling in some fields and submitting the form
analyzing the result page and extracting the desired data.
I was trying to do it in an iframe, but the domain of my page and of the system I want to drive are not the same, so the same-origin policy blocks it.
You may be interested in the well-known Selenium:
Selenium automates browsers. That's it. What you do with that power is entirely up to you. Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should!) also be automated as well.
See the demo on the Selenium site.
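A minimal sketch of the workflow described in the question, using Selenium's JavaScript bindings; the URL, field name, and selectors are placeholders, not a real system:

// Sketch: fill in a search form and scrape the result page.
// Assumes Node.js with the selenium-webdriver package (npm install selenium-webdriver).
const { Builder, By, until } = require('selenium-webdriver');

(async function run() {
  // Launch a real Chrome instance; Selenium drives it like a human would
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/search');              // placeholder URL
    await driver.findElement(By.name('q')).sendKeys('my query'); // placeholder field
    await driver.findElement(By.css('form')).submit();

    // Wait for the result page, then pull out the data we care about
    await driver.wait(until.elementLocated(By.css('.result')), 10000); // placeholder selector
    for (const row of await driver.findElements(By.css('.result'))) {
      console.log(await row.getText());
    }
  } finally {
    await driver.quit();
  }
})();

One caveat for the login-state requirement: Selenium starts a fresh browser profile by default, so the script would need to perform the login itself or be pointed at an existing profile.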
Or, depending on your requirements, you may be interested in the iMacros add-on for Firefox.

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the Activity/Chat bar on the right side of Facebook doesn't appear to refresh when going to different profiles; it's not an iframe and doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
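A minimal sketch of that hook pattern; the /fragments URL, the CSS class, and the element ID are made up for illustration:

// Sketch of the "hook" pattern: the link still works without JavaScript,
// but with JS enabled we fetch a fragment and swap it into an existing DIV.
document.querySelectorAll('a.ajax-link').forEach(function (link) {
  link.addEventListener('click', async function (event) {
    event.preventDefault(); // stop the normal full-page navigation

    // Ask the server for just the HTML fragment behind this link
    const response = await fetch('/fragments' + new URL(link.href).pathname);
    document.getElementById('content').innerHTML = await response.text();
    // The chat/activity bar elsewhere on the page is never touched,
    // which is why it appears to survive "navigation".
  });
});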
In the case of a chat like Facebook's, you must save the entire conversation on the server side (for example, in a database). Then, when the user changes pages, you can restore the state of the conversation either on the server side (with PHP) or by querying your server the way you do for the chat itself (JavaScript + AJAX).
This isn't done in JavaScript alone; it needs to be done using your back-end scripting language.
In PHP, for example, you use sessions: variables set by server-side scripts are maintained on the server and tied together across multiple requests/hits using a cookie.
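The answer above describes PHP sessions; the same idea, sketched in Node.js with the express-session package (chosen only to keep this page's examples in JavaScript), looks like this:

// Sketch of server-side sessions, analogous to PHP sessions.
// Assumes Node.js with express and express-session installed.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({
  secret: 'replace-with-a-real-secret', // signs the session-ID cookie
  resave: false,
  saveUninitialized: false
}));

app.get('/', function (req, res) {
  // State stored in req.session survives across requests;
  // the browser only ever holds an opaque session-ID cookie.
  req.session.views = (req.session.views || 0) + 1;
  res.send('You have viewed this page ' + req.session.views + ' times');
});

app.listen(3000);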
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list (cookies, server-side persistence such as sessions or database writes, a simple form POST, VIEWSTATE in .NET, etc.).
You can reopen your last closed web page by pressing Ctrl+Shift+T and then save whatever content you like. For example: if I closed a page about document sharing and I am now on a travel page, pressing Ctrl+Shift+T automatically reopens my last page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.

Web crawler/spider to fetch AJAX-based links

I want to create a web crawler/spider that iteratively fetches all the links on a page, including JavaScript-based (AJAX) links, catalogs all of the objects on the page, and builds and maintains a site hierarchy. My questions are:
Which language/technology is better suited (for fetching JavaScript-based links)?
Are there any open-source tools out there?
Thanks
Brajesh
You can automate the browser. For example, have a look at http://watir.com/
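As a concrete illustration of the browser-automation approach, here is a sketch using Puppeteer, a headless-Chrome library that is not mentioned in the original answer; the URL is a placeholder:

// Sketch: crawl a page with a real (headless) browser so that links
// injected by JavaScript/AJAX exist in the DOM before we read it.
// Assumes Node.js with the puppeteer package (npm install puppeteer).
const puppeteer = require('puppeteer');

(async function crawl() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Wait until network activity settles so AJAX-inserted content has loaded
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // Collect every href currently present in the rendered DOM
  const links = await page.$$eval('a[href]', anchors =>
    anchors.map(a => a.href)
  );
  console.log(links);

  await browser.close();
})();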
Fetching AJAX links is something that even the search giants haven't fully accomplished yet, because AJAX links are dynamic and both the request and the response vary greatly with the user's actions. That's probably why SEF-AJAX (Search Engine Friendly AJAX) is being developed: a technique that makes a website completely indexable by search engines while acting as a web application when visited by a browser. For reference, you may check this link: http://nixova.com
No offence, but beyond that I don't see any way of tracking AJAX links; that's where my knowledge ends. :)
You can do it with PHP, simple_html_dom, and Java. Have the PHP crawler copy the pages to your local machine or web server, open them with a Java application (a JPane or something), select all the text, and grab it. Send it to your database or wherever you want to store it. Track all <a> tags, and tags with an onclick or mouseover attribute, and check what happens when you call them again. If the source HTML (the document returned from the server) differs in size or MD5 hash, you know it is an effective link and can grab it.

Track HTML submissions when JavaScript is being used

Conventional HTML lets you submit data to a website via forms, and the destination (target) is usually visible in the source code. With JavaScript, it's a bit more difficult: you have to go through the JS files and work out what happens and where the data is being sent.
Is there an easier way to pin down where my data ends up (i.e., what file/address it is being sent to)?
Something like: I click the button and it shows me what type of action was used and the destination URL?
I'm on Ubuntu.
Are you trying to prevent phishing attacks, or merely to capture redirects/AJAX calls on button clicks? What's the purpose of this experiment?
I ask because Firebug (a Firefox add-on) or the Chrome Inspector (native to Chrome, though not quite as robust as Firebug) will do the things you're asking for, and give you lots more information to boot.
Additionally, you might consider setting up a local proxy on your machine and capturing all HTTP traffic. It really depends on what you're trying to accomplish.
$("form").live("submit", function(){
alert("The page will be going to: "+this.action);
});
Of course, you will also need to know whether the window is actually being unloaded, because the submit could be handled and stopped by the JavaScript...
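Building on that caveat, here is a hedged sketch that logs every destination whether the form submits normally or script intercepts it; wrapping fetch and XMLHttpRequest is an illustrative technique, not something from the answers above:

// Sketch: log where data is headed, via normal form submits or
// script-driven requests. Paste into the browser's developer console.

// 1. Catch form submissions in the capture phase, before page handlers run
document.addEventListener('submit', function (event) {
  console.log('form submit ->', event.target.method, event.target.action);
}, true);

// 2. Wrap fetch so AJAX destinations are logged too
const originalFetch = window.fetch;
window.fetch = function () {
  console.log('fetch ->', arguments[0]);
  return originalFetch.apply(this, arguments);
};

// 3. Wrap XMLHttpRequest.open for older-style AJAX calls
const originalOpen = XMLHttpRequest.prototype.open;
XMLHttpRequest.prototype.open = function (method, url) {
  console.log('XHR ->', method, url);
  return originalOpen.apply(this, arguments);
};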
