How to make an HTTP request using JavaScript in VXML? - javascript

How to make an HTTP request using JavaScript in VXML?
(Generally, src contains a link to an XML file for the <data> element, but in my case it does not have to be an XML file, so I think I can't use the <data> element here.)

There is nothing in the pure ECMAScript supported by VXML browsers (that I know of -- unless someone has significantly extended their browser beyond the standard) that allows anything like what you seem to be asking for, such as XMLHttpRequest for regular web AJAX requests. However, as Kevin Junghans mentioned, you could make use of the <data> element to fetch a document that is expected to be XML. Some browsers may have extensions to the VXML standard that allow you to specify the file type coming back, letting you pick either XML or JSON.
However, a more generalized solution, if you don't know beforehand what format the fetched document will be in, may be to write a wrapper XML web service which in turn requests the desired document, and wraps it in XML.
e.g.
<var name="docURI" expr="'http://someserver/some/doc.json'" />
<data name="documentContents" src="myservice.xml.php" namelist="docURI" />
and write myservice.xml.php to return something like
<?xml version="1.0"?>
<documentWrapper>content from doc.json</documentWrapper>
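As a rough illustration only (the exact DOM access can vary by VXML platform), the wrapped text could then be read back out of the documentContents variable that the <data> element populates:
<script>
  // documentContents is the read-only DOM exposed by the <data> element above;
  // documentElement is <documentWrapper>, and its first child is the wrapped text.
  var wrappedText = documentContents.documentElement.firstChild.nodeValue;
</script>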

Related

XML data from API has multiple undefined namespaces - how to use XSLT to display it in HTML?

I am using a REST API to get XML data from a database and am trying to display it in HTML using XSLT. Unfortunately, the XML data comes back with a few namespaces that are not defined. I can get the stylesheet to work just fine on a local copy of the data if I strip the namespaces or define them. Stripping the namespaces feels like a hack and not the correct way to do this.
this is essentially an example of the data I get back:
<root>
<entity:Entity ns1:atrib="foo">
<g:Value>foo1</g:Value>
<g:Name>fooName</g:Name>
</entity:Entity>
I am using XMLHttpRequest methods in JS to get this information and XSLTProcessor to transform it, then adding it into an element on the page. It's not displaying the transformed information and I'm 100% positive it's the namespaces that are causing the issue.
I've googled everything I can think of with no luck. Roadblocks like this are almost always due to me missing something fundamental.
XSLT will only operate on XML that is well-formed, and it requires all namespaces to be declared. If you want to process this data you should ideally fix it at source; if you can't do that you need to repair it before processing.
There are some XML parsers that allow you to process non-namespace-aware XML, and you could use such a parser as the basis of your repair tool, but this is such an unusual requirement that I'll have to leave you to research how to do that yourself.
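If you do have to repair it on the client, here is a minimal browser-side sketch of the idea, assuming XMLHttpRequest has already given you the raw markup as a string. The prefixes and the placeholder namespace URIs below are assumptions; use whatever URIs your stylesheet actually declares, and declare the same ones in the stylesheet so its match patterns can see these elements.
// Re-declare the undeclared prefixes on a wrapper element, then parse and transform.
function repairAndTransform(rawXml, xslDoc) {
    var wrapped =
        '<wrapper xmlns:entity="urn:example:entity"' +
        ' xmlns:ns1="urn:example:ns1"' +
        ' xmlns:g="urn:example:g">' + rawXml + '</wrapper>';
    var xmlDoc = new DOMParser().parseFromString(wrapped, "application/xml");
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    return processor.transformToFragment(xmlDoc, document);
}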

Why can you access an image on server directly but need to use AJAX for other file types?

Why is it that in JavaScript you can get direct access to an image on the server just by specifying its url, e.g. by doing:
myImg = new Image();
myImg.src = "xxx.jpg";
But in order to read for example a binary file you would have to make an AJAX request to access it?
What's the difference, exactly? Web programming continues to be a mystery to me...
Certain types of files have native handlers in the form of HTML elements or JS constructors that intrinsically know what to do with those types of files.
An image is one such example. By specifying it as the src attribute of an HTML <img /> tag or as the src property of a JS Image instance, you are implicitly feeding it to a mechanism that knows what to do with the image's source code.
This is not the case, say, for a text file. There is no HTML element or JS constructor associated with the loading and interpretation of text files. That is not to say you can't make the request. The following, though nonsensical, will nonetheless make a successful request:
<img src='some/text/file.txt' />
Rather, to meaningfully use the response, you will need AJAX, since HTML/JavaScript couldn't possibly hope to know, natively, what you intend to do with that response.
[EDIT]
Furthermore, as Djizeus makes clear in the comment below, images loaded into <img /> elements or Image constructors do not give you access to their source code - they are merely output as image data to the page.
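For completeness, a minimal sketch of what "using AJAX" means for that same (hypothetical) text file - the raw response becomes a string your script can actually work with:
var xhr = new XMLHttpRequest();
xhr.open("GET", "some/text/file.txt");
xhr.onload = function () {
    if (xhr.status === 200) {
        // The file's contents as plain text, ready for whatever you intend to do with it.
        console.log(xhr.responseText);
    }
};
xhr.send();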
Any resource that HTML provides a native means to reference (images, scripts, stylesheets, video, audio, anything that can appear in an iframe, etc) doesn't require the use of Ajax.
Ajax just makes the raw data in the HTTP response available to JavaScript so you use it when you want to process the data with JS instead of using the browser's native handling (or lack of native handling) of it.
If you want to dynamically, on the browser side, PROCESS the data arriving from a website, you need Ajax. Otherwise, you don't need it at all - but of course, some types of data may not "show" very well in the browser; it all depends on what the data is.
JavaScript also has some built-in functions, for example, to handle images - the browser then performs the "Ajax"-style request inside the JS engine, rather than you having to do it manually. Ajax is a more generic method of accessing arbitrary data (including passing your own data in the request to the server, so you get the data for the John Smith who is logged in, for example).

Extract javascript information from url with python

I have a URL that links to a javascript file, for example http://something.com/../x.js. I need to extract a variable from x.js
Is it possible to do this using python?
At the moment I am using urllib2.urlopen() but when I use .read() I get this lovely mess:
U�(��%y�d�<�!���P��&Y��iX���O�������<Xy�CH{]^7e� �K�\�͌h��,U(9\ni�A ��2dp}�9���t�<M�M,u�N��h�bʄ�uV�\��0�A1��Q�.)�A��XNc��$"SkD�y����5�)�B�t9�):�^6��`(���d��hH=9D5wwK'�E�j%�]U~��0U�~ʻ��)�pj��aA�?;n�px`�r�/8<?;�t��z�{��n��W
�s�������h8����i�߸#}���}&�M�K�y��h�z�6,�Xc��!:'D|�s��,�g$�Y��H�T^#`r����f����tB��7��X�%�.X\��M9V[Z�Yl�LZ[ZM�F���`D�=ޘ5�A�0�){Ce�L*�k���������5����"�A��Y�}���t��X�(�O�̓�[�{���T�V��?:�s�i���ڶ�8m��6b��d$��j}��u�D&RL�[0>~x�jچ7�
When I look in the dev tools to see the DOM, the only thing in the body is a string wrapped in tags. In the regular view that string is a json element.
.read() should give you the same thing you see in the "view source" window of your browser, so something's wrong. It looks like the HTTP response might be gzipped, but urllib2 doesn't support gzip. urllib2 also doesn't request gzipped data, so if this is the problem, the server is probably misconfigured, but I'm assuming that's out of your control.
I suggest using requests instead. requests automatically decompresses gzip-encoded responses, so it should solve this problem for you.
import requests
r = requests.get('https://something.com/x.js')
r.text # unparsed json output, shouldn't be garbled
r.json() # parses json and returns a dictionary
In general, requests is much easier to use than urllib2 so I suggest using it everywhere, unless you absolutely must stick to the standard library.
import json
import urllib2

# Note: urllib2 will not transparently decompress a gzip-encoded response
js = urllib2.urlopen("http://something.com/../x.js").read()
data = json.loads(js)

Getting real source code with javascript?

OK I don't use js enough to know, but is there a way to get the real source code of the page with it?
document.body.innerHTML for example gives some kind of "fixed up" version where malformed tags have been removed.
I'm guessing using XMLHttpRequest on the original page might work, but seems kind of stupid.
This happens because browsers parse the HTML into a DOM and don't keep the original HTML in memory. What is returned to you is the browser's conversion of the current DOM back to HTML, which is the reason for the uppercase tags and the lack of self-closing tags where applicable.
An XMLHttpRequest would be the best way to go. In most cases, assuming the server doesn't send the no-cache header, and the HTML page has finished downloading, the XMLHttpRequest would be almost instant because the file is fetched from the cache.
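A rough sketch of that suggestion - re-request the current page with XMLHttpRequest so you get the markup as the server sent it, rather than the browser's re-serialized DOM:
var xhr = new XMLHttpRequest();
xhr.open("GET", window.location.href);
xhr.onload = function () {
    // The original HTML exactly as delivered by the server (possibly from cache),
    // not the browser's fixed-up version.
    console.log(xhr.responseText);
};
xhr.send();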
For accessing JS of the same origin, XMLHttpRequest works quite well. You can access any JS document in "raw" format using this technique, without the browser getting in the way (i.e. converting it to a DOM and back).
I am not sure I understand your comment about XMLHttpRequest being stupid: is it because you are worried about the potential duplication of work, i.e. getting the code twice from the origin server?
I typically use FireBug when I want to peruse or copy source files.

Command line URL fetch with JavaScript capability

I use curl in PHP and httplib2 in Python to fetch URLs.
However, there are some pages that use JavaScript (AJAX) to retrieve the data after you have loaded the page and they just overwrite a specific section of the page afterward.
So, is there any command line utility that can handle JavaScript?
To know what I mean go to: monster.com and try searching for a job.
You'll see that the Ajax is getting the list of jobs afterward. So, if I wanted to pull in the jobs based on my keyword search, I would get the page with no jobs.
But via browser it works.
You can use PhantomJS (http://phantomjs.org). You can use it as below:
var page = require("webpage").create();
page.open("http://monster.com", function (status) {
    page.evaluate(function () {
        /* your JavaScript code here, e.g.
        $.ajax("....", function (result) {
            phantom.exit(0);
        });
        */
    });
});
Get FireBug and see the URL for that Ajax request. You may then use curl with that URL.
There are 2 ways to handle this. Write your screen scraper using a full browser-based client like WebKit, or go to the actual page, find out what the AJAX request is doing, and make that request directly. You then need to parse the results, of course. Use Firebug to help you out.
Check out this post for more info on the subject. The upvoted answer suggests using a test tool to drive a real browser.
What's a good tool to screen-scrape with Javascript support?
I think env.js can handle <script> elements. It runs in the Rhino JavaScript interpreter and has its own XMLHttpRequest object, so you should be able to at least run the scripts manually (select all the <script> tags, get the .js files, and call eval) if it doesn't run them automatically. Be careful about running scripts you don't trust, though, since they can use any Java classes.
I haven't played with it since John Resig's first version, so I don't know much about how to use it, but there's a discussion group on Google Groups.
Maybe you could try and use features of HtmlUnit in your own utility?
HtmlUnit is a "GUI-Less browser for Java programs". It models HTML documents and provides an API that allows you to invoke pages, fill out forms, click links, etc... just like you do in your "normal" browser.
It has fairly good JavaScript support (which is constantly improving) and is able to work even with quite complex AJAX libraries, simulating either Firefox or Internet Explorer depending on the configuration you want to use. It is typically used for testing purposes or to retrieve information from web sites.
Use LiveHttpHeaders, a plug-in for Firefox, to see all the URL details, and then use cURL with that URL.
LiveHttpHeaders shows all the information, like the type of method (POST or GET), headers, body, etc. It also shows the POST or GET parameters in the headers.
I think this may help you.
