How to combine scrapy and htmlunit to crawl urls with javascript

I'm working with Scrapy to crawl pages; however, I can't handle pages that rely on JavaScript.
People suggested I use HtmlUnit, so I installed it, but I don't know how to use it at all. Can anyone give me an example (scrapy + htmlunit)? Thanks very much.

To handle pages with JavaScript you can use WebKit or Selenium.
Here are some snippets from snippets.scrapy.org:
Rendered/interactive javascript with gtk/webkit/jswebkit
Rendered Javascript Crawler With Scrapy and Selenium RC

Here is a working example using Selenium and the PhantomJS headless webdriver in a downloader middleware.
from scrapy.http import HtmlResponse
from selenium import webdriver

class JsDownload(object):

    @check_spider_middleware
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')
        driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8',
                            body=driver.page_source.encode('utf-8'))
I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:
import functools
from scrapy import log

def check_spider_middleware(method):
    @functools.wraps(method)
    def wrapper(self, request, spider):
        msg = '%%s %s middleware step' % (self.__class__.__name__,)
        if self.__class__ in spider.middleware:
            spider.log(msg % 'executing', level=log.DEBUG)
            return method(self, request, spider)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return None
    return wrapper
settings.py:
DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}
For the wrapper to work, all spiders must have at minimum:
middleware = set([])
to include a middleware:
middleware = set([MyProj.middleware.ModuleName.ClassName])
The main advantage of implementing it this way rather than in the spider is that you only end up making one request. In the solution at reclosedev's second link, for example, the download handler processes the request and then hands the response off to the spider. The spider then makes a brand new request in its parse_page function -- that's two requests for the same content.
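For completeness, here is a minimal sketch of what a spider opting into the middleware might look like (the module path and URLs follow the placeholders above and are otherwise assumptions):
from MyProj.middleware.MiddleWareModule import JsDownload  # adjust to your actual module path
import scrapy

class MySpider(scrapy.Spider):
    name = 'my_spider'
    start_urls = ['http://example.com/js-page']  # placeholder URL
    # opt in to the JS-rendering middleware; spiders that omit this keep the default downloader
    middleware = set([JsDownload])

    def parse(self, response):
        # response.body here is the PhantomJS-rendered page source returned by the middleware
        yield {'title': response.xpath('//title/text()').get()}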
Another example: https://github.com/scrapinghub/scrapyjs
Cheers!

Related

scrapy + selenium: <a> tag has no href, but content is loaded by javascript

I'm almost there with my first attempt at using Scrapy and Selenium to collect data from a website with JavaScript-loaded content.
Here is my code:
# -*- coding: utf-8 -*-
import scrapy
from selenium import webdriver
from scrapy.selector import Selector
from scrapy.http import Request
from selenium.webdriver.common.by import By
import time

class FreePlayersSpider(scrapy.Spider):
    name = 'free_players'
    allowed_domains = ['www.forge-db.com']
    start_urls = ['https://www.forge-db.com/fr/fr11/players/?server=fr11']
    driver = {}

    def __init__(self):
        self.driver = webdriver.Chrome('/home/alain/Documents/repository/web/foe-python/chromedriver')
        self.driver.get('https://forge-db.com/fr/fr11/players/?server=fr11')

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        #time.sleep(1)
        sel = Selector(text=self.driver.page_source)
        players = sel.xpath('.//table/tbody/tr')
        for player in players:
            joueur = player.xpath('.//td[3]/a/text()').get()
            guilde = player.xpath('.//td[4]/a/text()').get()
            yield {
                'player': joueur,
                'guild': guilde
            }
        next_page_btn = self.driver.find_element_by_xpath('//a[@class="paginate_button next"]')
        if next_page_btn:
            time.sleep(2)
            next_page_btn.click()
            yield scrapy.Request(url=self.start_urls, callback=self.parse)
        # Close the selenium driver, so in fact it closes the testing browser
        self.driver.quit()

    def parse_players(self):
        pass
I want to collect user names and their guilds and output them to a csv file.
For now my issue is to proceed to the NEXT page and to parse again the content loaded by JavaScript.
Even if I'm able to simulate a click on the NEXT tag, I'm not 100% sure the code will go through all pages, and I'm not able to parse the new content using the same function.
Any idea how I could solve this issue?
Thanks.
Instead of using Selenium, you should try to recreate the request that updates the table. If you look closely at the HTML under Chrome devtools, you can see that the request is made with parameters and a response is sent back with the data in a nice structured format.
Please see here with regards to dynamic content in Scrapy. As it explains, the first step is to think about whether it is necessary to recreate browser activity, or whether you can get the information you need by reverse engineering the HTTP requests. Sometimes the information is hidden within <script></script> tags and you can use some regex or string methods to get what you want (a sketch follows below). Rendering the page and using browser activity should be thought of as a last resort.
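As a rough illustration of that script-tag case (the tag contents, key name and regex here are hypothetical, not taken from this site):
import json

def parse(self, response):
    # hypothetical: the page embeds its data as a JSON literal inside a script tag
    raw = response.xpath('//script[contains(text(), "playerData")]/text()').re_first(
        r'playerData\s*=\s*(\{.*?\})\s*;')
    if raw:
        data = json.loads(raw)
        # ...pull the fields you need out of the parsed dict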
Now, before I go into some background on reverse engineering the requests: the website you're trying to get information from only requires reverse engineering the HTTP requests.
Reverse Engineering HTTP requests in Scrapy
Now, in terms of the actual website, we can use Chrome devtools by right-clicking a page and choosing Inspect. Clicking the Network tab lets you see all the requests the browser makes to render the page. In this case you want to see what happens when you click next.
Image1: here
Here you can see all the requests made when you click next on the page. I always look for the biggest sized response as that'll most likely have your data.
Image2: here
Here you can see the request headers/params etc... the things you need to make a proper HTTP request. We can see that the request URL is actually getplayers.php, with all the params needed to get the next page added on. If you scroll down you can see all the parameters it sends to getplayers.php. Keep this in mind: sometimes we need to send headers, cookies and parameters.
Image3: here
Here is the preview of the data we would get back from the server if we make the correct request, it's a nice neat format which is great for scraping.
Now, you could copy the headers, parameters and cookies here into Scrapy, but it's always worth checking first whether simply making an HTTP request to the URL gives you the data you want; if so, that is the simplest way.
In this case it's true, and in fact you get it in a nice neat format with all the data.
Code example
import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['forge-db.com']

    def start_requests(self):
        url = 'https://www.forge-db.com/fr/fr11/getPlayers.php?'
        yield scrapy.Request(url=url)

    def parse(self, response):
        for row in response.json()['data']:
            yield {'name': row[2], 'guild': row[3]}
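If the bare URL alone hadn't returned the data, you could pass the query parameters and headers you saw in devtools explicitly. A sketch of an alternative start_requests (the parameter names and values below are placeholders; copy the real ones from devtools):
from urllib.parse import urlencode
import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['forge-db.com']

    def start_requests(self):
        # placeholder params -- use the names/values devtools shows for getPlayers.php
        params = {'server': 'fr11', 'start': 0, 'length': 100}
        headers = {'Referer': 'https://www.forge-db.com/fr/fr11/players/?server=fr11'}
        yield scrapy.Request(
            url='https://www.forge-db.com/fr/fr11/getPlayers.php?' + urlencode(params),
            headers=headers,
            callback=self.parse,
        )

    def parse(self, response):
        for row in response.json()['data']:
            yield {'name': row[2], 'guild': row[3]}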
Settings
In settings.py you need to set ROBOTSTXT_OBEY = False. The site doesn't want you to access this data, so we need to set it to False. Be careful, you could end up getting banned from the server.
I would also suggest a couple of other settings, to be respectful and to cache the results, so if you want to play around with this large dataset you don't hammer the server.
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 3
HTTPCACHE_ENABLED = True
HTTPCACHE_DIR = 'httpcache'
Comments on the code
We make a request to https://www.forge-db.com/fr/fr11/getPlayers.php? and if you were to print the response you would get all the data from the table; it's quite a lot... It looks like it's in JSON format, so we use Scrapy's newer response.json() feature to handle the JSON and convert it into a Python dictionary. Be sure that you have an up-to-date Scrapy to take advantage of this. Otherwise you could use the json library that Python provides to do the same thing, as sketched below.
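A minimal sketch of that fallback for an older Scrapy (drop it in place of the parse method above):
import json

def parse(self, response):
    # same dict that response.json() would return on newer Scrapy versions
    data = json.loads(response.text)
    for row in data['data']:
        yield {'name': row[2], 'guild': row[3]}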
Now you have to look at the preview data a bit here, but the individual rows are within response.json()['data'][i], where i is the row index. The name and guild are within response.json()['data'][i][2] and response.json()['data'][i][3], so we loop over every row in response.json()['data'] and grab the name and guild.
If the data weren't as structured as it is here and needed modifying, I would strongly urge you to use Items or ItemLoaders to define the fields that you then output. You can modify the extracted data more easily with ItemLoaders, and you can deal with duplicate items etc. using a pipeline. These are just some thoughts for the future; I almost never yield plain dictionaries when extracting data, particularly for large datasets.
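For instance, a minimal Item/ItemLoader sketch for this data (the field names, classes and processor choices are mine, not from the original answer):
import scrapy
from scrapy.loader import ItemLoader
from itemloaders.processors import MapCompose, TakeFirst  # on older Scrapy: scrapy.loader.processors

class PlayerItem(scrapy.Item):
    name = scrapy.Field()
    guild = scrapy.Field()

class PlayerLoader(ItemLoader):
    default_item_class = PlayerItem
    default_input_processor = MapCompose(str.strip)
    default_output_processor = TakeFirst()

# inside parse(), instead of yielding a dict:
#     loader = PlayerLoader()
#     loader.add_value('name', row[2])
#     loader.add_value('guild', row[3])
#     yield loader.load_item()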

Run local external python script in terminal from Vue application

Hi, I am a beginner with Vue.js and Firebase. I am building a simple GUI for a network intrusion detection system using Vue.js. I have written a Python script that lets me push the output from the terminal to Firebase. I am currently running the app on localhost:8080.
However, in my Vue app I want to run the local Python script in the terminal when I press the Start button and kill the process when I press Stop.
I read online that doing this requires a server. Does anyone have a good recommendation on how I can do it?
I hear that it may also be easier to just use 'child_process' from Node.js, but I can't use it in Vue.js.
What is the best / simplest way to implement this?
<template>
  <div id="new-packet">
    <h3>Start/Stop Analysis</h3>
    <button type="submit" class="btn">Start</button>
    <router-link to="/home" class="btn green">Stop</router-link>
  </div>
</template>
<script>
export default {
  name: 'new-packet',
  data () {
    return {}
  }
}
</script>
You can use Flask/Django to set up a RESTful API for your Python function as suggested by @mustafa, but if you don't want to set up those frameworks then you can set up a simple HTTP server handler like this:
import time
from http.server import HTTPServer, SimpleHTTPRequestHandler

HOST_NAME = 'localhost'  # adjust as needed
PORT_NUMBER = 8000       # adjust as needed

# This class contains methods to handle our requests to different URIs in the app
class MyHandler(SimpleHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    # Check the URI of the request to serve the proper content.
    def do_GET(self):
        if "URLToTriggerGetRequestHandler" in self.path:
            # If the URI contains URLToTriggerGetRequestHandler, execute the python script
            # that corresponds to it and get that data.
            # Whatever we send to "respond" as an argument will be sent back to the client.
            content = pythonScriptMethod()
            self.respond(content)  # we can retrieve the response within this scope and then pass info to self.respond
        else:
            super(MyHandler, self).do_GET()  # serves the static src file by default

    def handle_http(self, data):
        self.send_response(200)
        # Set the data type for the response header. In this case it will be json.
        # Setting these headers is important for the browser to know what to do with
        # the response. Browsers can be very picky this way.
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        return bytes(data, 'UTF-8')

    # Store the response for delivery back to the client. This is good to do so
    # the user has a way of knowing what the server's response was.
    def respond(self, data):
        response = self.handle_http(data)
        self.wfile.write(response)

# This is the main method that will fire off the server.
if __name__ == '__main__':
    server_class = HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    print(time.asctime(), 'Server Starts - %s:%s' % (HOST_NAME, PORT_NUMBER))
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    print(time.asctime(), 'Server Stops - %s:%s' % (HOST_NAME, PORT_NUMBER))
and then you can make a call to the server from Vue using axios like this:
new Vue({
  el: '#app',
  data () {
    return {
      info: null
    }
  },
  mounted () {
    axios
      .get('/URLToTriggerGetRequestHandler')
      .then(response => (this.info = response))
  }
})
I don't think there is any way you can directly call a Python script from JS or Vue.
You need to install Django or Flask on the local machine. Build a RESTful API like the one here: https://flask-restful.readthedocs.io/en/latest/quickstart.html#a-minimal-api
Write your Python inside it.
Call the Python code from Vue.js by using the RESTful URL, for example as sketched below.
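For illustration, a minimal Flask sketch along those lines (the endpoint names and the way the script is launched are assumptions, not part of the question):
import subprocess
from flask import Flask, jsonify

app = Flask(__name__)
process = None  # handle to the running detection script

@app.route('/start', methods=['POST'])
def start():
    global process
    if process is None or process.poll() is not None:
        # the path to your existing script is a placeholder
        process = subprocess.Popen(['python', 'detector.py'])
    return jsonify(status='started')

@app.route('/stop', methods=['POST'])
def stop():
    global process
    if process is not None and process.poll() is None:
        process.terminate()
    return jsonify(status='stopped')

if __name__ == '__main__':
    app.run(port=5000)
The Start/Stop buttons in the Vue component would then just POST to /start and /stop (with axios or fetch) instead of trying to spawn the process from the browser.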

Calling javascript function using python function in django

I have a website built using the Django framework that takes in an input CSV folder to do some data processing. I would like to use an HTML text box as a console log to let the users know that the data processing is underway. The data processing is done by a Python function. Is it possible for me to change/add text in the text box at certain intervals using my Python function?
Sorry if I am not specific enough with my question; I'm still learning how to use these tools!
Edit - Thanks for all the help, but I am still quite new at this and there are lots of things that I do not really understand. Here is an example of my Python function; not sure if it helps.
def query_result(request, job_id):
    info_dict = request.session['info_dict']
    machines = lt.trace_machine(inputFile.LOT.tolist())
    return render(request, 'tools/result.html', {'dict': json.dumps(info_dict),
                                                 'job_id': job_id})
Actually my main objective is to let the user know that the data processing has started and that the site is working. I was thinking maybe I could display an output log in an HTML textbox to achieve this purpose.
No, you cannot do that directly, because you are already on the server side and therefore cannot touch anything in the HTML page that is in the browser.
You have two ways to do it:
You can set up an interval function that calls the server, asks for the progress, and updates the page however you want in the callback.
You can open a socket connection between your server and the browser to push updates instantly.
While it is impossible for the server (Django) to directly update the client (browser), you can use JavaScript to make the request, and Django can return a StreamingHttpResponse. As each part of the response is received, you can update the textbox using JavaScript.
Here is a sample with pseudo-code:
def process_csv_request(request):
    csv_file = get_csv_file(request)
    return StreamingHttpResponse(process_file(csv_file))

def process_file(csv_file):
    for row in csv_file:
        yield progress
        actual_processing(row)
    yield "Done"
Alternatively, you could write the progress to the DB or some cache, and have the frontend repeatedly call an API that returns the progress, as sketched below.
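A rough sketch of that cache-based approach in Django (the view name and helpers are illustrative, not from the question):
from django.core.cache import cache
from django.http import JsonResponse

def process_csv(csv_file):
    total = count_rows(csv_file)           # placeholder helper
    for i, row in enumerate(csv_file, 1):
        actual_processing(row)             # placeholder helper
        cache.set('csv_progress', {'done': i, 'total': total}, 3600)

def progress(request):
    # the frontend polls this view (e.g. every second) and updates the textbox
    return JsonResponse(cache.get('csv_progress', {'done': 0, 'total': 0}))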
You can achieve this with websockets using Django Channels.
Here's a sample consumer:
import json
from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer

class Consumer(WebsocketConsumer):
    def connect(self):
        self.group_name = self.scope['user']
        print(self.group_name)  # use this for debugging, not sure what the scope returns
        # Join group
        async_to_sync(self.channel_layer.group_add)(
            self.group_name,
            self.channel_name
        )
        self.accept()

    def disconnect(self, close_code):
        # Leave group
        async_to_sync(self.channel_layer.group_discard)(
            self.group_name,
            self.channel_name
        )

    def update_html(self, event):
        status = event['status']
        # Send message to WebSocket
        self.send(text_data=json.dumps({
            'status': status
        }))
Running through the Channels 2.0 tutorial, you will learn that by putting some JavaScript on your page, each time it loads it will connect the user to a websocket consumer. On connect() the consumer adds the user to a group. This group name is used by your CSV processing function to send a message to the browser of any user connected to that group (in this case just one user) and update the HTML on your page.
import csv
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

def send_update(channel_layer, group_name, message):
    async_to_sync(channel_layer.group_send)(
        group_name,
        {
            'type': 'update_html',
            'status': message
        }
    )

def process_csv(file):
    channel_layer = get_channel_layer()
    group_name = get_user_name()  # function to get the same group name as in connect()
    with open(file) as f:
        reader = csv.reader(f)
        send_update(channel_layer, group_name, 'Opened file')
        for row in reader:
            send_update(channel_layer, group_name, 'Processing Row#: %s' % row)
You would include JavaScript on your page as outlined in the Channels documentation, then add an extra onmessage function for updating the HTML:
var webSocket = new ReconnectingWebSocket(...);
webSocket.onmessage = function(e) {
    var data = JSON.parse(e.data);
    $('#htmlToReplace').html(data['status']);
};

How does jQuery Autocomplete dynamically filter responses

I am currently using http://www.devbridge.com/sourcery/components/jquery-autocomplete/#jquery-autocomplete to autocomplete input.
My question is: how is the demo at the above link automatically filtering results?
If I use a local datastore, it filters the results for me.
<script>
var suggestions = [ "Afghan",
                    "African",
                    "Senegalese",
                    "American",
                    "Arabian",
                    "Arab Pizza",
                    "Argentine",
                    "Armenian",
                    "Asian Fusion",
                    "Asturian",
                    "Australian",
                    "Austrian"
];

$('#categories').autocomplete({
    // serviceUrl: '/autocomplete/categories',
    lookup: suggestions,
    delimiter: ',',
    maxHeight: 200,
    minChars: 2
});
</script>
However, if I instead replace "lookup:" with an external datastore (serviceUrl), the results are no longer filtered.
Here's my code for the external-calls version:
class AjaxHandler(webapp2.RequestHandler):
    def __init__(self, request, response):
        self.initialize(request, response)
        self.categories = []
        with open("static/categories.data") as categories_file:
            for entry in categories_file:
                self.categories.append(str(entry))
                print entry

    def get(self):
        suggestions = {"suggestions": self.categories}
        self.response.write(json.dumps(suggestions))
        self.response.headers.add_header("Content-Type", "application/json; charset-UTF-8")
With this version, it's still doing an edit-distance with all of the entries, but filtering is no longer working.
Here is their API: https://github.com/devbridge/jQuery-Autocomplete
There's a bunch of options there, and if anyone can give me some pointers to which one might help, that'd be great.
That demo isn't using an external data source.
But I'm not sure what you're asking: the whole point of using an external data source is that it's the source that does the filtering - it only returns values that match the token that is sent with the Ajax get. Otherwise you might as well include all the data in the original request.
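To make that concrete for the webapp2 handler in the question, the server-side filtering could look roughly like this (assuming the plugin sends the typed text as a query-string parameter, named 'query' by default if I recall the linked API correctly, and reusing the categories list loaded in __init__):
class AjaxHandler(webapp2.RequestHandler):
    def get(self):
        # the typed text arrives as a query-string parameter; the name 'query' is an assumption
        q = self.request.get('query', '').lower()
        matches = [c for c in self.categories if q in c.lower()]
        self.response.headers['Content-Type'] = 'application/json; charset=UTF-8'
        self.response.write(json.dumps({"suggestions": matches}))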
When you try to make a request to another server from your JavaScript, it will usually be blocked by the web browser because of security concerns (you can Google the keyword cross-domain javascript request).
If you are using Java, you can create some Java code (a Controller or Servlet, not JavaScript) that makes the request to the other server and passes the result to your HTML, just like a bridge. You can do the same thing with PHP or Python, as sketched below.
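A rough Python sketch of such a bridge, in the same webapp2 style as the question's code (the remote URL is a placeholder; on App Engine you might use urlfetch instead of urllib2):
import urllib2  # Python 2, matching the question's code

class CategoriesProxyHandler(webapp2.RequestHandler):
    def get(self):
        # fetch from the remote service on the server side, so the browser
        # only ever talks to our own domain (no cross-domain request needed)
        remote = urllib2.urlopen('https://example.com/autocomplete/categories')
        self.response.headers['Content-Type'] = 'application/json; charset=UTF-8'
        self.response.write(remote.read())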

Using PUT/POST/DELETE with JSONP and jQuery

I am working on creating a RESTful API that supports cross-domain requests, JSON/JSONP support, and the main HTTP methods (PUT/GET/POST/DELETE). While it will be easy to access this API through server-side code, it would be nice to expose it to JavaScript. From what I can tell, when doing a JSONP request with jQuery, it only supports the GET method. Is there a way to do a JSONP request using POST/PUT/DELETE?
Ideally I would like a way to do this from within jQuery (through a plugin if the core does not support it), but I will take a plain JavaScript solution too. Any links to working code or explanations of how to code it would be helpful, thanks.
Actually - there is a way to support POST requests.
And there is no need for a PROXY server - just a small utility HTML page, described below.
Here's how you get effectively a POST cross-domain call, including attached files and multi-part and all :)
Here, first, are the steps to understanding the idea; after that - you'll find an implementation sample.
How is jQuery's JSONP implemented, and why doesn't it support POST requests?
Traditional JSONP is implemented by creating a script element and appending it to the DOM, which forces the browser to fire an HTTP request to retrieve the source for the tag and then execute it as JavaScript. The HTTP request the browser fires is a simple GET.
What is not limited to GET requests?
A FORM. Submit the FORM while specifying the cross-domain server as its action.
A FORM tag can be created entirely using script, populated with all its fields using script, have all necessary attributes set, be injected into the DOM, and then be submitted - all using script.
But how can we submit a FORM without refreshing the page?
We specify as the target of the form an IFRAME in the same page.
An IFRAME can also be created, set up, named and injected into the DOM using script.
But how can we hide this work from the user?
We'll contain both the FORM and the IFRAME in a hidden DIV using style="display:none".
(and here's the most complicated part of the technique, be patient)
But an IFRAME from another domain cannot call a callback on its top-level document. How do we overcome that?
Indeed, if the response to the FORM submit is a page from another domain, any script communication between the top-level page and the page in the IFRAME results in "access denied". So the server cannot call back using a script. What can the server do? Redirect. The server may redirect to any page - including pages in the same domain as the top-level document - pages that can invoke the callback for us.
How can a server redirect?
Two ways:
Using a client-side script like <script>location.href = 'some-url'</script>
Using HTTP-Header. See: http://www.webconfs.com/how-to-redirect-a-webpage.php
So I end up with another page? How does it help me?
This is a simple utility page that will be used for all cross-domain calls. Actually, this page is in fact a kind of proxy, but it is not a server - it's a simple, static HTML page that anybody with Notepad and a browser can use.
All this page has to do is invoke the callback on the top-level document, with the response data from the server. Client-side scripting has access to all URL parts, and the server can put its response there, encoded as part of the URL, as well as the name of the callback that has to be invoked. This means the page can be a static HTML page and does not have to be a dynamic server-side page :)
This utility page will take the information from the URL it runs in - specifically, in my implementation below, from the query-string parameters (or you can write your own implementation using the anchor - i.e. the part of a URL to the right of the "#" sign). And since this page is static - it can even be allowed to be cached :)
Won't adding a DIV, a SCRIPT and an IFRAME for every POST request eventually leak memory?
If you leave them in the page - it will. If you clean up after yourself - it will not. All we have to do is give an ID to the DIV, which we can use to clean up the DIV and the FORM and IFRAME inside it whenever the response arrives from the server, or times out.
What do we get?
Effectively a POST cross-domain call, including attached files and multi-part and all :)
What are the limits?
The server response is limited to whatever fits into a redirection.
The server must ALWAYS return a REDIRECT to POST requests. That includes 404 and 500 errors.
Alternatively - create a timeout on the client just before firing the request, so you'll have a chance to detect requests that have not returned.
Not everybody can understand all this and all the stages involved. It's a kind of infrastructure-level work, but once you get it running - it rocks :)
Can I use it for PUT and DELETE calls?
A FORM tag does not do PUT and DELETE.
But that's better than nothing :)
Ok, got the concept. How is it done technically?
What I do is:
I create the DIV, style it as invisible, and append it to the DOM. I also give it an ID so that I can clean it up from the DOM after the server response has arrived (the same way jQuery cleans up its JSONP SCRIPT tags - but here it's the DIV).
Then I compose a string that contains both the IFRAME and the FORM - with all attributes, properties and input fields - and inject it into the invisible DIV. It is important to inject this string into the DIV only AFTER the DIV is in the DOM. If not - it will not work in all browsers.
After that - I obtain a reference to the FORM and submit it.
Just remember one thing before that - set a timeout callback in case the server does not respond, or responds in a wrong way.
The callback function contains the clean-up code. It is also called by a timer in case of a response timeout (and clears its timeout timer when a server response arrives).
Show me the code!
The code snippet below is totally "neutral", in "pure" JavaScript, and declares whatever utilities it needs. Just to simplify the explanation - it all runs in the global scope; however, it should really be a little more sophisticated...
Organize it into functions as you wish and parameterize what you need - but make sure that all parts that need to see each other run in the same scope :)
For this example - assume the client runs on http://samedomain.com and the server runs on http://crossdomain.com.
The script code on the top-level document
//declare the Async-call callback function on the global scope
function myAsyncJSONPCallback(data){
    //clean up
    var e = document.getElementById(id);
    if (e) e.parentNode.removeChild(e);
    clearTimeout(timeout);

    if (data && data.error){
        //handle errors & TIMEOUTS
        //...
        return;
    }

    //use data
    //...
}

var serverUrl = "http://crossdomain.com/server/page"
  , params = { param1  : "value of param 1" //I assume this value to be passed
             , param2  : "value of param 2" //here I just declare it...
             , callback: "myAsyncJSONPCallback"
             }
  , clientUtilityUrl = "http://samedomain.com/utils/postResponse.html"
  , id  = "some-unique-id" // unique Request ID. You can generate it your own way
  , div = document.createElement("DIV") //this is where the actual work starts!
  , HTML = [ "<IFRAME name='ifr_",id,"'></IFRAME>"
           , "<form target='ifr_",id,"' method='POST' action='",serverUrl
           , "' id='frm_",id,"' enctype='multipart/form-data'>"
           ]
  , each, pval, timeout;

//augment utility func to make the array a "StringBuffer" - see usage below
HTML.add = function(){
    for (var i = 0; i < arguments.length; i++)
        this[this.length] = arguments[i];
}

//add rurl to the params object - part of infrastructure work
params.rurl = clientUtilityUrl; //ABSOLUTE URL to the utility page; must be on
                                //the SAME DOMAIN as the page that makes the request

//add all params to the composed string of the FORM and the IFRAME inside the FORM tag
for (each in params){
    pval = params[each].toString().replace(/\"/g,"&quot;"); //assure that a " mark will not break
    HTML.add("<input name='",each,"' value='",pval,"'/>");  // the composed string
}

//close the FORM tag in the composed string and put all parts together
HTML.add("</form>");
HTML = HTML.join(""); //Now the composed HTML string is ready :)

//prepare the DIV
div.id = id; // this ID is used to clean up once the response has come, or a timeout is detected
div.style.display = "none"; //assure the DIV will not influence the UI

//TRICKY: append the DIV to the DOM and *ONLY THEN* inject the HTML into it.
// For some reason it works in all browsers only this way. Injecting the DIV as part
// of a composed string did not always work for me.
document.body.appendChild(div);
div.innerHTML = HTML;

//TRICKY: note that myAsyncJSONPCallback must see the 'timeout' variable
timeout = setTimeout("myAsyncJSONPCallback({error:'TIMEOUT'})", 4000);

document.getElementById("frm_" + id).submit();
The server on the cross-domain
The response from the server is expected to be a REDIRECTION, either via an HTTP header or by writing a SCRIPT tag. (A redirection is better; the SCRIPT tag is easier to debug with JS breakpoints.)
Here's an example of the header, assuming the rurl value from above:
Location: http://samedomain.com/HTML/page?callback=myAsyncJSONPCallback&data=whatever_the_server_has_to_return
Note that
the value of the data argument can be a JavaScript object literal or a JSON expression; however, it had better be URL-encoded.
the length of the server response is limited to the length of URL that a browser can process.
Also - in my system the server has a default value for rurl, so this parameter is optional. But you can do that only if your client application and server application are coupled.
APIs to emit redirection header:
http://www.webconfs.com/how-to-redirect-a-webpage.php
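Just to make that concrete, here is a rough Python sketch of emitting the redirect (Django is used purely as an example stack; the original answer is framework-agnostic and the data payload is a placeholder):
from urllib.parse import urlencode
from django.http import HttpResponseRedirect

def cross_domain_endpoint(request):
    # handle the POSTed form fields here, then bounce back to the client's utility page,
    # carrying the callback name and a short, URL-encoded response payload
    qs = urlencode({'callback': request.POST.get('callback'),
                    'data': '{"ok": true}'})  # placeholder payload; must fit in a URL
    return HttpResponseRedirect(request.POST.get('rurl') + '?' + qs)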
Alternatively, you can have the server write as a response the following:
<script>
location.href="http://samedomain.com/HTML/page?callback=myAsyncJSONPCallback&data=whatever_the_server_has_to_return"
</script>
But HTTP headers would be considered cleaner ;)
The utility page on the same domain as the top-level document
I use the same utility page as the rurl for all my POST requests: all it does is take the name of the callback and the parameters from the query string using client-side code, and call the callback on the parent document. It can do this ONLY when the page runs in the EXACT same domain as the page that fired the request! Important: unlike cookies - subdomains do not count!! It has to be the exact same domain.
It is also more efficient if this utility page contains no references to other resources - including JS libraries. So this page is plain JavaScript. But you can implement it however you like.
Here's the responder page that I use, whose URL is found in the rurl of the POST request (in the example: http://samedomain.com/utils/postResponse.html):
<html><head>
<script type="text/javascript">
    //parse and organize all QS parameters in a more comfortable way
    var params = {};
    if (location.search.length > 1) {
        var i, arr = location.search.substr(1).split("&");
        for (i = 0; i < arr.length; i++) {
            arr[i] = arr[i].split("=");
            params[arr[i][0]] = unescape(arr[i][1]);
        }
    }

    //support server answer as JavaScript Object-Literals or JSON:
    // evaluate the data expression
    try {
        eval("params.data = " + params.data);
    } catch (e) {
        params.data = { error: "server response failed with evaluation error: " + e.message
                      , data : params.data
                      }
    }

    //invoke the callback on the parent
    try {
        window.parent[ params.callback ](params.data || "no-data-returned");
    } catch (e) {
        //if something went wrong - at least let's learn about it in the
        // console (in addition to the timeout)
        throw "Problem in passing POST response to host page: \n\n" + e.message;
    }
</script>
</head><body></body></html>
It's not as much automation and 'ready-made' library as jQuery, and it involves some 'manual' work - but it has its charm :)
If you're a keen fan of ready-made libraries - you can also check the Dojo Toolkit, which, when I last checked (about a year ago), had its own implementation of the same mechanism.
http://dojotoolkit.org/
Good luck buddy, I hope it helps...
Is there a way to do a JSONP request using POST/PUT/DELETE?
No there isn't.
No. Consider what JSONP is: an injection of a new <script> tag in the document. The browser performs a GET request to pull the script pointed to by the src attribute. There's no way to specify any other HTTP verb when doing this.
Rather than banging our heads against JSONP, which won't support the POST method by default, we can go for CORS. It requires no big changes to the conventional way of programming: with a simple jQuery Ajax call we can reach across domains.
In the CORS approach, you have to add headers in the server-side scripting file, or in the server itself (in the remote domain), to enable this access. This is much more reliable, since we can prevent/restrict the domains making unwanted calls.
More detail can be found on the Wikipedia page for CORS.
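For instance, a minimal sketch of enabling those headers on the remote server in Python (Flask is just an example stack here; the header names are the standard CORS ones):
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/resource', methods=['GET', 'POST', 'PUT', 'DELETE'])
def resource():
    resp = jsonify({'ok': True})
    # allow a specific origin rather than '*' to restrict who may call the API
    resp.headers['Access-Control-Allow-Origin'] = 'https://samedomain.com'
    resp.headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE'
    return resp
A complete setup would also answer the browser's OPTIONS preflight requests for PUT/DELETE (or use an extension such as flask-cors), but this shows the idea.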
