Finding the AJAX request in Dojo - JavaScript

I am working on a crawler to scrape all the data from a website. The site uses AJAX for pagination. I found this in the href of the page-number links:
javascript:dojo.publish("showResultsForPageNumber",[{pageNumber:"4",pageSize:"15", linkId:"WC_SearchBasedNavigationResults_pagination_link_4_categoryResults"}])
What is happening here? I am not familiar with Dojo. Can anyone help me find the corresponding server script so that I can scrape all the data, including the paginated results?
Update #1
In the console I found the following; this is the code the publish call ends up in:
showResultsPage:function(data){
    var pageNumber = data['pageNumber'];
    var pageSize = data['pageSize'];
    pageNumber = dojo.number.parse(pageNumber);
    pageSize = dojo.number.parse(pageSize);
    setCurrentId(data["linkId"]);
    if(!submitRequest()){
        return;
    }
    console.debug(wc.render.getContextById('searchBasedNavigation_context').properties); //line 773
    var beginIndex = pageSize * ( pageNumber - 1 );
    cursor_wait();
    wc.render.updateContext('searchBasedNavigation_context', {"productBeginIndex": beginIndex,"resultType":"products"});
    this.updateHistory();
    MessageHelper.hideAndClearMessage();
},

It's part of the publisher/subscriber mechanism of the Dojo framework and, by itself, says nothing about the AJAX request that gets executed.
If you're not familiar with the publisher/subscriber pattern, let's explain that first. This pattern is commonly used to decouple certain components/parts of an application:
on one side, someone publishes information, while on the other side (= some other part of the application) someone listens for it.
In this case, the following data is published (= second parameter):
[{
    pageNumber: "4",
    pageSize: "15",
    linkId: "WC_SearchBasedNavigationResults_pagination_link_4_categoryResults"
}]
Obviously, not all subscribers in the application need to know about this data, so there is a topic system. In this case, the data is published to a topic called "showResultsForPageNumber" (= first parameter).
To know what happens next, you will have to look through the code for someone who subscribes to that topic. So somewhere in the code you will find something like this:
dojo.subscribe("showResultsForPageNumber", function(data) {
// Does something with the data, perhaps an AJAX call?
});
To answer your question: search the code for something like dojo.subscribe("showResultsForPageNumber", as it will tell you what happens next.
However, if you're just interested in the AJAX calls, it is easier to check the network requests. In Google Chrome/Mozilla Firefox/... you can press F12 to open the developer tools, then select the Network tab and enable recording if necessary. Now click the pagination controls and you will see a log of all network traffic, including the request and response data.
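For illustration only, the subscriber you eventually find might look something like the sketch below. The topic name is the one from your link; the URL, parameters and rendering logic here are made-up placeholders, not the site's actual code:
// Hypothetical subscriber: listens on the topic published by the pagination link
// and fires a Dojo AJAX request. The endpoint and handling below are invented
// purely to show the pattern.
dojo.subscribe("showResultsForPageNumber", function(data) {
    dojo.xhrGet({
        url: "/search/results",              // placeholder - not the real endpoint
        content: {
            pageNumber: data.pageNumber,
            pageSize: data.pageSize
        },
        handleAs: "json",
        load: function(response) {
            // re-render the result list with the new page
            console.log(response);
        },
        error: function(err) {
            console.error(err);
        }
    });
});
Note that in your update the subscriber (showResultsPage) doesn't issue the XHR itself; it calls wc.render.updateContext, which triggers the refresh, so the Network tab remains the quickest way to see the actual request URL.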

Here you are publishing to the topic named "showResultsForPageNumber", where "pageNumber", "pageSize" and "linkId" are properties of the object in your argument array.
See the following links: ref1 ref2

Related

Web scraping in R by first navigating through a JavaScript module

I looked up various questions and answers but unfortunately none of the problems I found dealt with a case that is similar to mine. In a typical question, the JavaScript table builds up directly when the website is loaded. In my case, however, I first have to navigate through the JavaScript module and select several criteria before I get the sought-after result.
This is my case: I have to scrape the exchange rates for various currencies from the website www.globocambio.co. To do that, I have to (1) navigate to “I WANT COLOMBIAN PESO”, (2) select the currency (e.g., “Chilean Peso”), and (3) select the collection destination (e.g., “El Dorado International Airport”). Only then is the respective exchange rate loaded. See this screenshot for illustration: I marked the three selection steps in red; green is the data point that I want to scrape for different currencies.
I am not very familiar with JavaScript but I tried to understand what is going on. Here is what I found out:
Using Chrome DevTools, I investigated the Network activity when loading an exchange rate. There is an XHR called “GetPrice” that requests the price using this URL: https://reservations.globocambio.co/DesktopModules/GlobalExchange/API/Widget/GetPrice and using the following Form Data
ISOAOrigen=CLP&cantidadOrigen=9000&ISOADestino=COP&cantidadDestino=0&centerId=27&operationType=OperationTypesBuying
I understand that the Form Data contains the information that I initially selected manually:
operationType=OperationTypesBuying: this is the “I WANT COLOMBIAN PESO” option
ISOAOrigen=CLP: this is the “Chilean Peso”
centerId=27: this is the “El Dorado International Airport”
The server responds to my request with the following information:
{"MonedaOrigen":{"ISOA":"CLP","Nombre":null,"Margen":0.1630000000,"Tramo":0.0,"Fixing":2.9000000000},"CantidadOrigen":9000.00,"MonedaDestino":{"ISOA":"COP","Nombre":null,"Margen":0.0,"Tramo":0.0,"Fixing":0.0},"CantidadDestino":21845.70,"TipoCambio":2.42730000000000000000,"MargenOrigen":0.0,"TramoOrigen":0.0,"FixingOrigen":0.0,"MargenDestino":0.0,"TramoDestino":0.0,"FixingDestino":0.0,"IdCentro":"27","Comision":null,"ComisionTramoSuperior":null,"ComisionAplicada":{"CodigoMoneda":null,"CodigoTipoMoneda":0,"ComisionFija":0.0,"ComisionVariable":0.0,"TramoInicio":0.0,"TramoFin":null,"Orden":0}}
From this response, "TipoCambio":2.42730000000000000000 is then being written on the website using this line of HTML code: <span id="spTipoCambioCompra">2.427300</span>
This means that "TipoCambio" is the value that I am looking for.
So, I have to communicate somehow via R with the server using the Form Data as input variables. Can anyone tell me how to do this?
I mean, I understand that I have to combine the URL https://reservations.globocambio.co/DesktopModules/GlobalExchange/API/Widget/GetPrice with the form data “ISOAOrigen=CLP&cantidadOrigen=9000&ISOADestino=COP&cantidadDestino=0&centerId=27&operationType=OperationTypesBuying” somehow, but I do not know how that works.
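If it helps to make the goal concrete, the request I am trying to reproduce should look roughly like the sketch below (written with JavaScript's fetch purely to illustrate the shape of the call; the field values are the ones captured above, and the equivalent in R would post the same form fields to the same URL):
// Sketch only: the GetPrice call captured in DevTools, expressed with fetch.
// Field values come from my manual selection and may need adjusting.
var url = "https://reservations.globocambio.co/DesktopModules/GlobalExchange/API/Widget/GetPrice";
var body = new URLSearchParams({
    ISOAOrigen: "CLP",                     // currency to convert from (Chilean Peso)
    cantidadOrigen: "9000",                // amount
    ISOADestino: "COP",                    // currency to convert to (Colombian Peso)
    cantidadDestino: "0",
    centerId: "27",                        // El Dorado International Airport
    operationType: "OperationTypesBuying"  // the "I WANT COLOMBIAN PESO" option
});
fetch(url, { method: "POST", body: body })
    .then(function (r) { return r.json(); })
    .then(function (json) { console.log(json.TipoCambio); }); // the exchange rate I need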
Any help will be appreciated!
Update:
I still have no idea how to solve the issue above, but I am trying to approach it in small steps.
Using RSelenium, I am currently trying to find out how to click on the option “I WANT COLOMBIAN PESO”. My idea was to use the following code:
library(RSelenium)
remDr <- RSelenium::remoteDriver(remoteServerAddr = "localhost",
                                 port = 4445L,
                                 browserName = "chrome")
remDr$open()
remDr$navigate("https://www.globocambio.co/en/home")
webElem <- remDr$findElement("id", "tabCompra") #What is wrong here?
webElem$clickElement() # Click on "I WANT COLOMBIAN PESO"
But I get an error message after executing webElem <- remDr$findElement("id", "tabCompra"):
Selenium message:no such element: Unable to locate element: {"method":"css selector","selector":"#tabCompra"}
(Session info: chrome=81.0.4044.113)
For documentation on this error, please visit: https://www.seleniumhq.org/exceptions/no_such_element.html
...
Error: Summary: NoSuchElement
Detail: An element could not be located on the page using the given search parameters.
class: org.openqa.selenium.NoSuchElementException
Further Details: run errorDetails method
What am I doing wrong here?
I solved my problem using selenium in Python:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys  # needed for Keys.ENTER etc.

driver = webdriver.Firefox(executable_path = '/your_path/geckodriver')
driver.get("https://www.globocambio.co/en/")
driver.switch_to.frame("iframeWidget")         # the widget lives inside an iframe
elem = driver.find_element_by_id('tabCompra')  # "I WANT COLOMBIAN PESO" tab
elem.click()
elem = driver.find_element_by_id('inputddlMonedaOrigenCompra')
elem.click()
elem.send_keys(Keys.CLEAR)
elem.send_keys("Chilean Peso")
elem.send_keys(Keys.ENTER)
elem.send_keys(Keys.ARROW_DOWN)
elem.send_keys(Keys.RETURN)
elem = driver.find_element_by_id('info-change-compra')
print(elem.text)

Using Wikia API

I am trying to access the X-Men API on Wikia, to try to extract the name and image of each character, to then be used in a SPA built with JavaScript.
This is the link to the page on the wiki:
http://x-men.wikia.com/wiki/Category:Characters
I cannot for the life of me figure out how to access the API. It doesn't seem to be RESTful, and that's all I have any experience with.
Has anyone used the Wikia API successfully before? I can get some articles and such, but nothing useful.
(The documentation is shocking; I've been searching around for hours.)
Probably you have already found a solution, but I think you should write something like this:
import requests
xmen_url = "http://x-men.wikia.com/api/v1/Articles/List?expand=1&category=Characters&limit=10000"
r = requests.get(xmen_url)
response = r.json()
# print(response)
a = 0
for item in response['items']:
    a += 1
    print("{}\t{}\t({})".format(str(a), item['title'].encode(encoding='utf-8'), item['id']))
This will print a list of all the articles in the category Characters (I think there are also some subcategories; you should check). If you want to take a deeper look at the JSON file, you can uncomment the commented line.
Hope it helps.

Can you retrieve the Collection Event Script in Deployd as a string for documentation?

I have been using Deployd for a week or so, and was curious whether I could expose the contents of a collection's Event Script itself from the API (i.e. the contents of the /my-project/resources/my-collection/get.js file itself).
This could be useful to automatically produce documentation of the scripts being applied to Get, Post and other requests.
Thanks for the help,
Jacob
This is what I have so far: if I start at localhost:2404/dashboard, I can run the following code in the Chrome console to retrieve the string content of the GET event on the collection Tshirts:
dpd('__resources').get(Context.resourceId + '/' + 'get.js', function(res, err)
{
    _events['get'] = res && res.value;
    console.log(res.value);
});
Context.resourceId resolves to the collection ID, which is just "tshirts".
This successfully outputs the data I am trying to access, but I wonder whether it is possible to retrieve it from the API. I imagine I need to dig into Node.js in general to wrap my head around this.
Thanks again,
Jacob

Get Twitter Feed as JSON without authentication

I wrote a small JavaScript a couple of years ago that grabbed a user's (mine) most recent tweet and then parsed it out for display, including links, date, etc.
It used this JSON call to retrieve the tweets, and it no longer works:
http://twitter.com/statuses/user_timeline/radfan.json
It now returns the error:
{"errors":[{"message":"Sorry, that page does not exist","code":34}]}
I have looked at using the API version (code below), but this requires authentication, which I would rather avoid since it is only used to display my latest tweet on my website, and my tweets are public anyway on my profile page:
http://api.twitter.com/1/statuses/radfan.json
I haven't kept up with Twitter's API changes as I no longer really work with it. Is there a way around this problem, or is it no longer possible?
Previously, the Search API was the only Twitter API that didn't require some form of OAuth. Now it does require auth.
Twitter's Search API came from a third-party acquisition - they rarely support it and seem unenthused that it even exists. On top of that, there are many limitations to the payload, including but not limited to a severely reduced set of key:value pairs in the JSON or XML file you get back.
When I heard this, I was shocked. I spent a LONG time figuring out how to use the least amount of code to do a simple GET request (like displaying a timeline).
I decided to go the OAuth route to be able to ensure a relevant payload. You need a server-side language to do this. JavaScript is visible to end users, and thus it's a bad idea to include the necessary keys and secrets in a .js file.
I didn't want to use a big library, so the answer for me was PHP and help from @Rivers' answer here. The answer below it by @lackovic10 describes how to include queries in your authentication.
I hope this helps others save time thinking about how to go about using Twitter's API with the new OAuth requirement.
You can access and scrape Twitter via advanced search without being logged in:
https://twitter.com/search-advanced
GET request
When performing a basic search request you get:
https://twitter.com/search?q=Babylon%205&src=typd
q (our query encoded)
src (assumed to be the source of the query, i.e. typed)
By default, Twitter returns the top 25 results, but if you click on "All" you can get the realtime tweets:
https://twitter.com/search?f=realtime&q=Babylon%205&src=typd
JSON contents
More Tweets are loaded on the page via AJAX:
https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd&include_available_features=1&include_entities=1&last_note_ts=85&max_position=TWEET-553069642609344512-553159310448918528-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Use max_position to request the next tweets
The following URL returns a JSON object containing everything you need to scrape the contents:
https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd
has_more_items (bool)
items_html (html)
max_position (key)
refresh_cursor (key)
DOM elements
Here is a list of DOM elements you can use for extraction:
The author's Twitter handle
div.original-tweet[data-tweet-id]
The name of the author
div.original-tweet[data-name]
The user ID of the author
div.original-tweet[data-user-id]
Timestamp of the post
span._timestamp[data-time]
Timestamp of the post in ms
span._timestamp[data-time-ms]
Text of Tweet
p.tweet-text
Number of Retweets
span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount[data-tweet-stat-count]
Number of favorites
span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount[data-tweet-stat-count]
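Putting the timeline endpoint and the selectors above together, one pass of a scraper might look roughly like the sketch below (jQuery-style, mirroring the examples elsewhere on this page; the endpoint, query and markup are the ones described above and may well have changed since this was written):
// Sketch: fetch one page of search results, then read the fields out of the
// returned items_html using the selectors listed above.
$.getJSON('https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd', function (json) {
    var el = $('<div></div>').html(json.items_html); // parse the HTML fragment
    el.find('div.original-tweet').each(function () {
        console.log(
            $(this).attr('data-name'),          // author's name
            $(this).attr('data-tweet-id'),      // tweet id
            $(this).find('p.tweet-text').text() // tweet text
        );
    });
    // json.max_position can be sent back as &max_position=... to load the next page
});
Feeding each response's max_position back into the request is what lets you page backwards through the results.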
Resources
https://code.recuweb.com/2015/scraping-tweets-directly-from-twitter-without-authentication/
If you're still looking for unauthenticated tweets in JSON, this should work:
https://github.com/cosmocatalano/tweet-2-json
As you can see in the documentation, using the REST API you'll need OAuth tokens in order to do this. Luckily, we can use the Search API (which doesn't use OAuth) with the from:[USERNAME] operator.
Example:
http://search.twitter.com/search.json?q=from:marcofolio
Will give you a JSON object with tweets from that user, where
object.results[0]
will give you the last tweet.
Here is a quick hack (really a hack; use it with caution as it's not future-proof) which uses http://anyorigin.com to scrape the Twitter site for the latest tweets.
http://codepen.io/JonOlick/pen/XJaXBd
It works by using anyorigin (you have to pay to use it) to grab the HTML. It then parses the HTML using jQuery to extract the relevant tweets.
Tweets on the mobile site use a div with the class .tweet-text, so this is pretty painless.
The relevant code looks like this:
$.getJSON('http://anyorigin.com/get?url=mobile.twitter.com/JonOlick&callback=?', function(data){
    // Remap ... utf8 encoding to ascii.
    var bar = data.contents;
    bar = bar.replace(/…/g, '...');
    var el = $( '<div></div>' );
    el.html(bar);
    // Change all links to point back at twitter
    $('.twitter-atreply', el).each(function(i){
        $(this).attr('href', "https://twitter.com" + $(this).attr('href'))
    });
    // For all tweets
    $('.tweet-text', el).each(function(i){
        // We only care about the first 4 tweets
        if(i < 4) {
            var foo = $(this).html();
            $('#test').html($('#test').html() + "<div class=ProfileTweet><div class=ProfileTweet-contents>" + foo + "</div></div><br>");
        }
    });
});
You can use a Twitter API wrapper such as TweetJS.com, which offers a limited subset of the Twitter API's functionality but does not require authentication. It's called like this:
TweetJs.ListTweetsOnUserTimeline("PetrucciMusic",
    function (data) {
        console.log(data);
    });
You can use the Twitter API v1 to get tweets without using OAuth. For example: this link returns @jack's last 100 tweets.
The timeline documentation is here.
The method "GET statuses/user_timeline" need a user Authentification like you can see on the official documentation :
You can use the search method "GET search" wich not require authentification.
You have a code for starting here : http://jsfiddle.net/73L4c/6/
function searchTwitter(query) {
    $.ajax({
        url: 'http://search.twitter.com/search.json?' + jQuery.param(query),
        dataType: 'jsonp',
        success: function(data) {
            var tweets = $('#tweets');
            tweets.html('');
            for (res in data['results']) {
                tweets.append('<div>' + data['results'][res]['from_user'] + ' wrote: <p>' + data['results'][res]['text'] + '</p></div><br />');
            }
        }
    });
}

$(document).ready(function() {
    $('#submit').click(function() {
        var params = {
            q: $('#query').val(),
            rpp: 5
        };
        // alert(jQuery.param(params));
        searchTwitter(params);
    });
})

Using PUT/POST/DELETE with JSONP and jQuery

I am working on creating a RESTful API that supports cross-domain requests, JSON/JSONP, and the main HTTP methods (PUT/GET/POST/DELETE). Now, while it will be easy to access this API through server-side code, it would be nice to expose it to JavaScript. From what I can tell, when doing a JSONP request with jQuery, it only supports the GET method. Is there a way to do a JSONP request using POST/PUT/DELETE?
Ideally I would like a way to do this from within jQuery (through a plugin if the core does not support this), but I will take a plain javascript solution too. Any links to working code or how to code it would be helpful, thanks.
Actually - there is a way to support POST requests.
And there is no need for a proxy server - just a small utility HTML page, described below.
Here's how you get, effectively, a cross-domain POST call, including attached files, multipart and all :)
First come the steps for understanding the idea; after that, you'll find an implementation sample.
How is jQuery's JSONP implemented, and why doesn't it support POST requests?
Traditional JSONP is implemented by creating a script element and appending it to the DOM, which forces the browser to fire an HTTP request to retrieve the source for the tag and then execute it as JavaScript - and the HTTP request the browser fires is a simple GET.
What is not limited to GET requests?
A FORM. Submit the FORM while specifying the cross-domain server as its action.
A FORM tag can be created entirely with script, populated with all its fields using script, have all the necessary attributes set, be injected into the DOM, and then be submitted - all using script.
But how can we submit a FORM without refreshing the page?
We set the form's target to an IFRAME in the same page.
An IFRAME can also be created, set, named and injected into the DOM using script.
But how can we hide this work from the user?
We'll contain both the FORM and the IFRAME in a hidden DIV using style="display:none"
(and here's the most complicated part of the technique, be patient)
But an IFRAME from another domain cannot call a callback on its top-level document. How do we overcome that?
Indeed, if the response to the FORM submit is a page from another domain, any script communication between the top-level page and the page in the IFRAME results in "access denied". So the server cannot call back using a script. What can the server do? Redirect. The server may redirect to any page - including pages in the same domain as the top-level document - pages that can invoke the callback for us.
How can a server redirect?
Two ways:
Using a client-side script like <script>location.href = 'some-url'</script>
Using an HTTP header. See: http://www.webconfs.com/how-to-redirect-a-webpage.php
So I end up with another page? How does that help me?
This is a simple utility page that will be used by all cross-domain calls. In fact, this page is a kind of proxy, but it is not a server - it is a simple, static HTML page that anybody with Notepad and a browser can use.
All this page has to do is invoke the callback on the top-level document, with the response data from the server. Client-side scripting has access to all URL parts, and the server can put its response there, encoded as part of the URL, along with the name of the callback that has to be invoked. This means the page can be a static HTML page and does not have to be a dynamic server-side page :)
This utility page takes the information from the URL it runs in - specifically, in my implementation below, the query-string parameters (or you could write your own implementation using the anchor/hash - i.e. the part of a URL to the right of the "#" sign). And since this page is static - it can even be allowed to be cached :)
Won't adding a DIV, a SCRIPT and an IFRAME for every POST request eventually leak memory?
If you leave them in the page - it will. If you clean up after yourself - it will not. All we have to do is give the DIV an ID that we can use to clean up the DIV, and the FORM and IFRAME inside it, whenever the response arrives from the server or the request times out.
What do we get?
Effectively a POST cross-domain call, including attached files and multi-part and all :)
What are the limits?
The server response is limited to whatever fits into a redirection.
The server must ALWAYS return a REDIRECT to a POST request. That includes 404 and 500 errors.
Alternatively - create a timeout on the client just before firing the request, so you'll have a chance to detect requests that have not returned.
Not everybody can understand all this and all the stages involved; it's infrastructure-level work, but once you get it running - it rocks :)
Can I use it for PUT and DELETE calls?
A FORM tag does not support PUT and DELETE.
But that's better than nothing :)
Ok, got the concept. How is it done technically?
What I do is:
I create the DIV, style it as invisible, and append it to the DOM. I also give it an ID so that I can clean it up from the DOM after the server response has arrived (the same way jQuery cleans up its JSONP SCRIPT tags - but here, the DIV).
Then I compose a string that contains both the IFRAME and the FORM - with all attributes, properties and input fields - and inject it into the invisible DIV. It is important to inject this string into the DIV only AFTER the DIV is in the DOM; otherwise it will not work in all browsers.
After that - I obtain a reference to the FORM and submit it.
Just remember, one line before that, to set a timeout callback in case the server does not respond, or responds in the wrong way.
The callback function contains the clean-up code. It is also called by a timer in case of a response timeout (and clears its timeout timer when a server response arrives).
Show me the code!
The code snippet below is written in "pure" JavaScript and declares whatever utilities it needs. Just to keep the explanation of the idea simple, it all runs in the global scope; in practice it should be a little more sophisticated...
Organize it into functions as you like and parameterize what you need - but make sure that all the parts that need to see each other run in the same scope :)
For this example - assume the client runs on http://samedomain.com and the server runs on http://crossdomain.com.
The script code on the top-level document
//declare the Async-call callback function on the global scope
function myAsyncJSONPCallback(data){
//clean up
var e = document.getElementById(id);
if (e) e.parentNode.removeChild(e);
clearTimeout(timeout);
if (data && data.error){
//handle errors & TIMEOUTS
//...
return;
}
//use data
//...
}
var serverUrl = "http://crossdomain.com/server/page"
, params = { param1 : "value of param 1" //I assume this value to be passed
, param2 : "value of param 2" //here I just declare it...
, callback: "myAsyncJSONPCallback"
}
, clientUtilityUrl = "http://samedomain.com/utils/postResponse.html"
, id = "some-unique-id"// unique Request ID. You can generate it your own way
, div = document.createElement("DIV") //this is where the actual work start!
, HTML = [ "<IFRAME name='ifr_",id,"'></IFRAME>"
, "<form target='ifr_",id,"' method='POST' action='",serverUrl
, "' id='frm_",id,"' enctype='multipart/form-data'>"
]
, each, pval, timeout;
//augment utility func to make the array a "StringBuffer" - see usage below
HTML.add = function(){
for (var i =0; i < arguments.length; i++)
this[this.length] = arguments[i];
}
//add rurl to the params object - part of infrastructure work
params.rurl = clientUtilityUrl //ABSOLUTE URL to the utility page must be on
//the SAME DOMAIN as page that makes the request
//add all params to composed string of FORM and IFRAME inside the FORM tag
for(each in params){
pval = params[each].toString().replace(/\"/g,"&quot;");//assure: that " mark will not break
HTML.add("<input name='",each,"' value='",pval,"'/>"); // the composed string
}
//close FORM tag in composed string and put all parts together
HTML.add("</form>");
HTML = HTML.join(""); //Now the composed HTML string ready :)
//prepare the DIV
div.id = id; // this ID is used to clean-up once the response has come, or timeout is detected
div.style.display = "none"; //assure the DIV will not influence UI
//TRICKY: append the DIV to the DOM and *ONLY THEN* inject the HTML in it
// for some reason it works in all browsers only this way. Injecting the DIV as part
// of a composed string did not always work for me
document.body.appendChild(div);
div.innerHTML = HTML;
//TRICKY: note that myAsyncJSONPCallback must see the 'timeout' variable
timeout = setTimeout("myAsyncJSONPCallback({error:'TIMEOUT'})",4000);
document.getElementById("frm_"+id+).submit();
The server on the cross-domain
The response from the server is expected to be a REDIRECTION, either via an HTTP header or by writing a SCRIPT tag (redirection is better; the SCRIPT tag is easier to debug with JS breakpoints).
Here's the example of the header, assuming the rurl value from above
Location: http://samedomain.com/HTML/page?callback=myAsyncJSONPCallback&data=whatever_the_server_has_to_return
Note that
the value of the data argument can be a JavaScript object literal or JSON expression; however, it had better be URL-encoded.
the length of the server response is limited to the length of a URL the browser can process.
Also - in my system the server has a default value for the rurl, so this parameter is optional. But you can do that only if your client application and server application are coupled.
APIs to emit redirection header:
http://www.webconfs.com/how-to-redirect-a-webpage.php
Alternatively, you can have the server write as a response the following:
<script>
location.href="http://samedomain.com/HTML/page?callback=myAsyncJSONPCallback&data=whatever_the_server_has_to_return"
</script>
But HTTP-Headers would be considered more clean ;)
The utility page on the same domain as the top-level document
I use the same utility page as the rurl for all my POST requests: all it does is take the name of the callback and the parameters from the query string using client-side code, and invoke the callback on the parent document. It can do this ONLY when the page runs in the EXACT same domain as the page that fired the request! Important: unlike cookies, subdomains do not count!! It has to be the exact same domain.
It is also more efficient if this utility page contains no references to other resources, including JS libraries. So this page is plain JavaScript. But you can implement it however you like.
Here's the responder page that I use, whose URL is found in the rurl of the POST request (in the example: http://samedomain.com/utils/postResponse.html ):
<html><head>
<script type="text/javascript">
//parse and organize all QS parameters in a more comfortable way
var params = {};
if (location.search.length > 1) {
var i, arr = location.search.substr(1).split("&");
for (i = 0; i < arr.length; i++) {
arr[i] = arr[i].split("=");
params[arr[i][0]] = unescape(arr[i][1]);
}
}
//support server answer as JavaScript Object-Literals or JSON:
// evaluate the data expression
try {
eval("params.data = " + params.data);
} catch (e) {
params.data = {error: "server response failed with evaluation error: " + e.message
,data : params.data
}
}
//invoke the callback on the parent
try{
window.parent[ params.callback ](params.data || "no-data-returned");
}catch(e){
//if something went wrong - at least let's learn about it in the
// console (in addition to the timeout)
throw "Problem in passing POST response to host page: \n\n" + e.message;
}
</script>
</head><body></body></html>
It's not as much automation and 'ready-made' library as jQuery, and it involves some 'manual' work - but it has its charm :)
If you're a keen fan of ready-made libraries - you can also check the Dojo Toolkit which, when I last checked (about a year ago), had its own implementation of the same mechanism.
http://dojotoolkit.org/
Good luck buddy, I hope it helps...
Is there a way to do a JSONP request using POST/PUT/DELETE?
No there isn't.
No. Consider what JSONP is: an injection of a new <script> tag in the document. The browser performs a GET request to pull the script pointed to by the src attribute. There's no way to specify any other HTTP verb when doing this.
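To make that concrete, here is roughly what a JSONP call boils down to (a simplified sketch; libraries like jQuery add callback bookkeeping and cleanup on top, and the URL here is a placeholder):
// All JSONP does is inject a <script> tag; the browser fetches its src with GET.
// There is simply no place to ask for PUT, POST or DELETE.
function jsonp(url, callbackName) {
    var script = document.createElement("script");
    script.src = url + (url.indexOf("?") === -1 ? "?" : "&") + "callback=" + callbackName;
    document.head.appendChild(script); // this append fires the GET request
}
window.handleData = function (data) { console.log(data); };
// the server must reply with something like: handleData({...});
jsonp("http://example.com/api/items", "handleData"); // placeholder URL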
Rather than banging our heads against the JSONP method, which won't support the POST method by default, we can go for CORS. That requires no big changes to the conventional way of programming; a simple jQuery AJAX call can then go cross-domain.
With the CORS method, you have to add headers in the server-side scripting file, or on the server itself (in the remote domain), to enable this access. This is much more reliable, since we can prevent/restrict the domains making unwanted calls.
It is described in detail on the Wikipedia page.
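For example, a minimal client-side sketch (the endpoint is a placeholder; the remote server must reply with CORS headers such as Access-Control-Allow-Origin for your domain and Access-Control-Allow-Methods that include PUT):
// Ordinary jQuery AJAX to another domain - no JSONP involved. Works only if the
// remote server sends the appropriate CORS response headers.
$.ajax({
    url: "https://api.example.com/items/42",   // placeholder endpoint
    type: "PUT",
    contentType: "application/json",
    data: JSON.stringify({ name: "updated value" }),
    success: function (response) { console.log(response); },
    error: function (xhr) { console.error("Request failed:", xhr.status); }
});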
