Fill out form on websites from within code - javascript

I'm working with websites that have forms on their pages. I need to fill out the form and then submit it, using Javascript.
The problem I'm running into is that if I make a GET request to fetch the page's HTML, I don't have access to the JS running on that page, and therefore I can't actually submit the form (since the page is not connected to the server). How would I get around this? It could also be that some pages aren't running JS but are running PHP scripts instead.

You need a headless browser in this case. Here's one for .NET, if you can code C#; otherwise there are plenty of others for different platforms and languages.
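If you would rather stay in JavaScript, a headless browser for Node.js such as Puppeteer can do the same job: it loads the page, runs its scripts, fills the fields, and submits. A rough sketch (not the .NET tool mentioned above; the URL and field names are placeholders you would have to adapt):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();           // headless Chromium
  const page = await browser.newPage();
  await page.goto('https://example.com/form');        // placeholder URL
  await page.type('input[name="user"]', 'myUser');    // field names are assumptions
  await page.type('input[name="pass"]', 'myPass');
  await Promise.all([
    page.click('button[type="submit"]'),              // submit the form
    page.waitForNavigation()                          // wait for the next page
  ]);
  console.log(await page.title());                    // now on the post-submit page
  await browser.close();
})();

Because the page runs in a real (if invisible) browser, any JavaScript validation or PHP-backed submission on the site behaves just as it would for a normal visitor.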

Related

Pentest pure JavaScript (qooxdoo) Website

I'm wondering how I could pentest a website made completely in JavaScript, for example one using the qooxdoo framework.
Such websites make no requests to the server that respond with HTML content. When the page loads, only a single JavaScript file is transmitted (the page itself is an almost empty HTML document containing just a link to that JavaScript file), and the page is then built entirely by the loaded JS file, without any line of HTML written by the developer.
Typically, most web app scanners (like Nexpose) do some spidering/crawling: they check a website for links and forms, crawl any link that points to the same domain, and test any parameters found on those links. I assume such scanners would have little effect on a pure JS page.
Then there's the other possibility: a proxy (like Burp Suite) that captures any traffic sent to a server and can test any parameters found in those requests. That would probably work for testing the API server behind the website (for example, to find SQL injections).
But is there any way to test the client itself, for example for XSS (self or stored)?
Or, more generally: what types of attacks would you typically need to check for in such a pure JS web application, and what tools could help with that?

Auto form filling using java script

My requirement is to write a script that, when I run it, opens the page, fills in the fields, and automatically takes me to the next page.
For example, a script for www.irctc.co.in: when we log in to IRCTC, it asks for the username and password, and when we click Submit it redirects to the next page.
I want to write a script such that I just run it, it does all of these things internally, and I end up seeing the next page.
I am unable to figure out where I should start.
I think you are looking for something like Greasemonkey: https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/
Greasemonkey is a Mozilla Firefox extension that allows users to install scripts that make on-the-fly changes to web page content after or before the page is loaded in the browser.
If you use a different browser, then you can refer to: http://en.wikipedia.org/wiki/Greasemonkey#Equivalents_for_other_browsers
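As a rough illustration, a Greasemonkey user script that fills a login form and submits it could look like the sketch below; the @match pattern and the field names are guesses and have to be adapted to the real page:

// ==UserScript==
// @name     IRCTC auto login
// @match    https://www.irctc.co.in/*
// @grant    none
// ==/UserScript==

// Field names are placeholders -- inspect the real login form to find them.
var user = document.querySelector('input[name="userName"]');
var pass = document.querySelector('input[name="password"]');
if (user && pass) {
  user.value = 'yourUsername';
  pass.value = 'yourPassword';
  user.form.submit();   // submits the surrounding form and loads the next page
}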
Check out Watir (Web Application Testing in Ruby). Although it is meant for test automation, it might serve the purpose here: with Watir you write scripts in Ruby, execute them, and watch the magic happen. More information can be found here.

Pre-Fill html form using Javascript

I want to know if it is possible to autofill an HTML form on a website (not a local one) using JavaScript, and if it is, could you please point me in the right direction?
Edit: I have a Mozilla extension with some dropdowns, textareas, etc., from which I will get the data I want to put into the form.
Thanks.
JavaScript, running on a website, cannot cause a visitor's browser to go to another website and pre-fill a form there. This would be a serious security issue.
JavaScript running in a browser extension can, but the specifics depend on the type of extension (for example, Chrome extensions and Greasemonkey scripts work differently).
JavaScript running on a server (e.g. via Node.js) can go to another site and fill out a form there (e.g. with PhantomJS). It can't present the filled in form to the user without acting as a full proxy though.
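For the server-side route, a PhantomJS sketch might look like the following; the URL and field name are placeholders:

// fill_form.js -- run with: phantomjs fill_form.js
var page = require('webpage').create();

page.open('https://example.com/form', function (status) {
  if (status !== 'success') { phantom.exit(1); }
  page.evaluate(function () {
    // runs inside the remote page; the field name is an assumption
    document.querySelector('input[name="email"]').value = 'user@example.com';
    document.querySelector('form').submit();
  });
  window.setTimeout(function () {     // give the submission a moment to complete
    page.render('result.png');        // or inspect page.content
    phantom.exit();
  }, 3000);
});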

How do you keep content from your previous web page after clicking a link?

I'm sorry if this is a newbie question, but I don't really know what to search for either. How do you keep content from a previous page when navigating through a website? For example, the right-side Activity/Chat bar on Facebook: it doesn't appear to refresh when going to different profiles; it's not an iframe, and it doesn't appear to be AJAX (I could be wrong).
Thanks,
I believe what you're seeing in Facebook is not actual "page loads", but clever use of AJAX or AHAH.
So ... imagine you've got a web page. It contains links. Each of those links has a "hook" -- a chunk of JavaScript that gets executed when the link gets clicked.
If your browser doesn't support JavaScript, the link works as it normally would on an old-fashioned page, and loads another page.
But if JavaScript is turned on, then instead of navigating to an HREF, the code run by the hook causes a request to be placed to a different URL that spits out just the HTML that should be used to replace a DIV that's already showing somewhere on the page.
There's still a real link in the HTML just in case JS doesn't work, so the HTML you're seeing looks as it should. Try disabling JavaScript in your browser and see how Facebook works.
Live updates like this are all over the place in Web 2.0 applications, from Facebook to Google Docs to Workflowy to Basecamp, etc. The "better" tools provide the underlying HTML links where possible so that users without JavaScript can still get full use of the applications. (This is called Progressive Enhancement or Graceful degradation, depending on your perspective.) Of course, nobody would expect Google Docs to work without JavaScript.
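A stripped-down version of that pattern, using jQuery (the selector and container ID here are made up for illustration):

// Intercept clicks on links that carry a (hypothetical) data-ajax attribute.
$('a[data-ajax]').click(function (e) {
  e.preventDefault();              // skip the full page load
  $('#content').load(this.href);   // fetch the HTML fragment and swap it into the DIV
  // anything outside #content -- like the chat sidebar -- never reloads
});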
In the case of a chat like Facebook's, you have to save the entire conversation on the server side (for example, in a database). Then, when the user changes pages, you can restore the state of the conversation either on the server side (with PHP) or by querying your server the same way the chat does (JavaScript + AJAX).
This isn't done in JavaScript. It needs to be done using your back-end scripting language.
In PHP, for example, you use sessions. Variables set by server-side scripts can be maintained on the server and tied together (across multiple requests/hits) using a cookie.
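The same idea, shown in JavaScript rather than PHP for consistency with the rest of this page (a sketch using Node with the express-session middleware; the secret and route are placeholders):

var express = require('express');
var session = require('express-session');

var app = express();
app.use(session({
  secret: 'replace-me',        // signs the session cookie
  resave: false,
  saveUninitialized: false
}));

app.get('/visit', function (req, res) {
  // State lives on the server, keyed by the cookie the browser sends back.
  req.session.views = (req.session.views || 0) + 1;
  res.send('You have viewed ' + req.session.views + ' pages this session.');
});

app.listen(3000);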
One really helpful trick is to run HTTPFox in Firefox so you can actually monitor what's happening as you browse from one page to the next. You can check out the POST/Cookies/Response tabs and watch for which web methods are being called by the AJAX-like behaviors on the page. In doing this you can generally deduce how data is flowing to and from the pages, even though you don't have access to the server side code per se.
As for the answer to your specific question, there are too many approaches to list: cookies, server-side persistence such as sessions or database writes, a simple form POST, VIEWSTATE in .NET, and so on.
You can reopen your last closed web page by pressing Ctrl+Shift+T and pick up its content from there. For example, if I closed a web page about document sharing and I am now on a travel web page, pressing Ctrl+Shift+T automatically reopens that last page. This works in Firefox, Internet Explorer, Opera, and more. Hope this answer is helpful to you.

Enabling SEO on AJAX pages

I'm experimenting with building sites dynamically on the client side, using JavaScript plus a JSON content server: the JS retrieves the content and builds the page client-side.
Now, the content won't be indexed by Google this way. Is there a workaround for this? Like having a crawler version and a user version? Or having some sort of static archives? Has anyone done this already?
You should always make sure that your site works without JavaScript. Create links that point to static versions of the content, then add JavaScript click handlers to those links that block the default action from happening and make the AJAX request instead. For example, using jQuery:
HTML:
<a href='static_content.html' id='static_content'>Go to page!</a>
Javascript:
$('#static_content').click(function(e) {
    e.preventDefault();     // stop the browser from following the link
    // make the AJAX request here instead
});
That way the site is usable for crawlers and for users without JavaScript, and still has fancy AJAX for people with JavaScript.
If the site is meant to be indexed by Google, then the "information" you want searchable and public should be available without JavaScript. You can always add the dynamic stuff later, after the page loads, with JavaScript. This will not only make the page indexable but also make it load faster.
On the other hand, if the site is more of an application, à la Gmail, then you probably don't want Google indexing it anyway.
You could serve a server-rendered version and then replace it on load with the AJAX version.
But if you are going to do that, why not build the entire site that way and just use AJAX for interaction where the client supports it, in the spirit of unobtrusive JavaScript?
You can use PhantomJS to build a crawler version; see my solution here:
https://github.com/liuwenchao/ajax-seo
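The core of such a crawler version is just rendering the page headlessly and serving the resulting HTML to the crawler. In PhantomJS that boils down to something like this (the URL and the two-second wait are placeholders):

// snapshot.js -- run with: phantomjs snapshot.js > snapshot.html
var page = require('webpage').create();
page.open('https://example.com/#!/some-page', function (status) {
  // wait briefly so the client-side JS can build the DOM from the JSON content
  window.setTimeout(function () {
    console.log(page.content);   // the fully rendered HTML, ready to hand to crawlers
    phantom.exit();
  }, 2000);
});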
