There is a 3rd-party web service. One of the public web methods available is a GetDocument() method. This method returns a Document object. The Document object has properties such as File (byte[]), ContentType (string), etc.
My question: can I call this service using JavaScript (MooTools) + AJAX + JSON, get the Document object back (in this case an Excel document), and force the file download?
It is true that you typically cannot initiate a download from JavaScript, but there is a Flash component, Downloadify, that does enable client-side file generation.
So you can serve files for download from HTML/JavaScript.
With that problem solved, you still have the problem of how to get the data that you wish to serve from the source web service.
A 3rd-party service implies a cross-domain request, which is a no-no for XmlHttpRequest (Ajax) under the browser's same-origin policy.
A possible solution to this problem could be to use a common hidden IFrame technique to get the data.
Simply have an appropriate (hidden?) form that correctly posts to the web service, and point its target at a hidden IFrame element on which you trap the Load event and parse the data returned.
But current browsers have various security measures that limit your ability to read IFrames loaded from an external origin, so you are actually stuck here. Sorry to get your hopes up.
The only practical robust way to accomplish what you would like to do is to have a local server side script that can act as a proxy between your HTML/JavaScript and the external web service.
Using such a proxy, you can simply go back to using Ajax to get your data to serve up with Downloadify.
But then, since you are using a server script to get the data, why not just serve the data from the script for download?
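To make the proxy idea concrete, here is a minimal sketch assuming a Node.js/Express proxy; the third-party endpoint URL, the query parameter, and the JSON shape of the returned Document (base64 File plus ContentType) are assumptions for illustration, not the actual service contract:

// Minimal server-side proxy sketch (Node.js/Express). The remote endpoint and the
// shape of its Document response are assumptions for illustration only.
const express = require('express');
const fetch = require('node-fetch'); // or use the global fetch on Node 18+

const app = express();

app.get('/download-report', async (req, res) => {
  // Call the third-party GetDocument method from the server, where the
  // same-origin policy does not apply.
  const remote = await fetch(
    'https://thirdparty.example.com/DocumentService/GetDocument?id=' +
    encodeURIComponent(req.query.id)
  );
  const doc = await remote.json(); // assumed: { File: '<base64>', ContentType: '...' }

  const bytes = Buffer.from(doc.File, 'base64');
  res.set('Content-Type', doc.ContentType || 'application/vnd.ms-excel');
  // The attachment disposition is what actually forces the browser download.
  res.set('Content-Disposition', 'attachment; filename="report.xls"');
  res.send(bytes);
});

app.listen(3000);

With something like that in place, the page only needs a plain link or window.location = '/download-report?id=...' and the browser handles the download itself.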
These are just my observations on the problem domain you present.
I have a shared folder on my server that allows another server to send an XML file to me, and I want my script to read this file automatically without opening any page.
I know how to open and read the file.
The issue is how to load it automatically in the background.
You have to create a page that reads the provided file and performs the required actions, then share that URL and the expected format with the team that will provide you the XML file.
It is very much like an API endpoint: you write the code that handles the request; in this scenario your endpoint acts as the server, and the XML file provider acts as the client.
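As a rough illustration of such an endpoint, here is a minimal sketch in Node.js/Express (the same idea works in PHP); the route name, port, and XML handling are assumptions:

// Minimal inbound-XML endpoint sketch. Route name and parser choice are illustrative.
const express = require('express');
const { XMLParser } = require('fast-xml-parser'); // any XML parser will do

const app = express();
app.use(express.text({ type: ['text/xml', 'application/xml'] })); // accept raw XML bodies

app.post('/inbound-xml', (req, res) => {
  const data = new XMLParser().parse(req.body); // turn the posted XML into a plain object
  // ... do the required actions with `data` here (validate, save, trigger jobs) ...
  res.sendStatus(200); // tell the provider the file was accepted
});

app.listen(3000);

You would then share that URL (and the expected XML format) with the providing team.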
I hope this answer helps.
Thanks
Traditionally, you need your server to periodically execute the script which reads the XML. That PHP script will need to parse the XML and handle the changes.
Alternatively, the source of the API can use push notifications to avoid polling from your server. The XML will be received whenever a change occurs on the source, without creating a lot of useless requests, but the XML will be parsed just as in the previous approach.
Last, but not least, you can use WebSockets for this purpose, or, if both computers are on the same network, plain sockets. Of course, a lot depends on the data source: whether you have access to it, how modern its technology is, and what it allows you to do.
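For the polling approach, a minimal sketch (Node.js shown; a PHP script run from cron would follow the same pattern) could look like this; the folder path and interval are assumptions:

// Poll the shared folder for new XML files and process them in the background.
const fs = require('fs');
const path = require('path');

const INBOX = '/path/to/shared/folder'; // hypothetical shared folder
const seen = new Set();                 // remember files we already handled

setInterval(() => {
  for (const name of fs.readdirSync(INBOX)) {
    if (!name.endsWith('.xml') || seen.has(name)) continue;
    seen.add(name);
    const xml = fs.readFileSync(path.join(INBOX, name), 'utf8');
    // ... parse the XML and handle the changes here ...
    console.log('processed', name);
  }
}, 60 * 1000); // check once a minute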
This is my situation:
I have a third party that uses software called MicroStrategy, which can generate documents and allows exporting them as PDF or Excel files. They provide me only the web API of this product, and I don't have any web service to work with.
The URL is like:
http://<third_part_domain>/microstrategy/asp/Main.aspx?Server=<third_part_domain>&Project=<project_name>&evt=3069&src=Main.aspx.3069&executionMode=3&promptAnswerMode=1&documentID=<doc_id>&uid=<username>&pwd=<password>&<other_parameters_for_request>
I have tried to obtain the file (which I must save on the server side) with Java code, but the response from the link we use is an HTML page with some JavaScript that performs more than one redirect, so I cannot interpret the response correctly and would need a browser to obtain the PDF.
So I thought of putting the page into an iframe and, after a while (the server usually takes 20 seconds), taking the PDF object with JavaScript and sending it to my server. But obviously the third party is on another domain, and the CORS policies block everything. To make matters worse, I cannot use the final URL to obtain the file, because MicroStrategy responds with an internal page of the administration console.
So, that's my question:
Is there a way (that is not on the MicroStrategy server side) to obtain the PDF directly from MicroStrategy?
Or is there a way, from the client side, to get around the origin-control problem? I have considered implementing a proxy as a solution, but it's too expensive.
Thanks to all!
You need two things in order to download a PDF from MicroStrategy using a URL:
In the document properties, set the default visualization to PDF. This is pretty trivial, and I think any of your MicroStrategy-savvy colleagues can help you with this.
Disable the waiting page; this is more complicated. When MicroStrategy generates a document it usually needs some time, and while the server is working it will show you a waiting page. That is useful if the request comes from a human (the human can go on Stack Overflow), not so much if the call arrives from an API.
The instructions to disable the waiting page are here: TN34124: How to Disable the Wait Page in MicroStrategy Web using the MicroStrategy Web SDK 9.x.
But I read from your question that you have no control over the third-party MicroStrategy application. In that case there is little you can do. You can try to ask them to implement the customization that removes the waiting page, or to allow you to use the taskproc API, but that's a story for another day.
Some options:
Ask the third party to schedule the PDF generation on their side and send it to you via mail, or place it in a folder that is shared between you.
Ask for a different URL from the file-share menu options. This will give a URL with 'subscriptionid' in it.
I am trying to search an outside URL for content matching "title" and return the results to my HTML page in the background through JavaScript. I have been using JavaScript and have not found any resources that resolve my query; maybe I'm asking it wrong?
But I would basically search the document with:
var title = document.getElementsByName("title");
The hard part is connecting to the page and searching through the HTML source code.
TIA!
You can't generally get the content of an outside URL from the browser unless the server specifically allows you to do so: the server must include a header in its response named Access-Control-Allow-Origin that contains a pattern or name matching your domain.
However, you can do it from the server side anyway, unless you are specifically blocked by the server; your server will be able to get the content of any URL.
You will need to develop a solution in which you grab the content of your outside URL from your server. It can be anything: PHP, Node.js, C#, etc. After receiving the response from the external server, deliver it to the browser in response to an AJAX call. Then you can play with it any way you want using JavaScript or jQuery.
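As a minimal sketch of that flow (assuming Node.js on the server; the route name and target URL are illustrative, and in practice you would whitelist which URLs may be fetched):

// Server side (Node.js/Express): fetch the outside URL and relay its HTML.
const express = require('express');
const app = express();

app.get('/proxy', async (req, res) => {
  const response = await fetch(req.query.url); // global fetch on Node 18+
  res.type('text/html').send(await response.text());
});

app.listen(3000);

On the browser side, request the relayed HTML from your own domain and search it:

// Browser side: same-origin request to your proxy, then a DOM search.
fetch('/proxy?url=' + encodeURIComponent('https://example.com/page'))
  .then(r => r.text())
  .then(html => {
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const matches = doc.getElementsByName('title'); // same lookup as in the question
    console.log(matches.length);
  });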
Important Note:
Make sure that whatever you are trying to access, in whatever way, you are allowed to access it. If they (your outside URL) want to share something with the public, they should be providing APIs or other solutions that allow you access to their content.
Research has led to a solution: implementing a scraper. There are many in existence, Scrapy for instance. Just a heads-up for those with the same question.
I am writing a social media engine using JavaScript and PHP, with flat files as my main information transfer tool. When my program appends to text files that are over a day old, the changes will not show up when requested by an AJAX call until the files are accessed directly by URL and refreshed twice. Is there a way to prevent this from happening? Please do not suggest the use of a database.
The most likely reason why you need to access the flat files directly by URL and refresh twice is that your browser is caching them. Refreshing updates the browser's cache with the latest version.
When a web server is serving static content, it tells the web browser to cache the content for quite a while, since static content is unlikely to change for some time.
When a web server is serving dynamic content that almost always means that the content is going to change very fast and that it might be a bad idea to cache it.
Now, the reason why you shouldn't access your flat files directly with AJAX is not the cache issue (although serving them through a script does resolve it) but security. What happens if you have some secret information in the file? Sure, you can tell the browser not to fetch that part, but the user still has full access (by URL) to the file.
The same way you don't let the browser access your database, you don't let the browser access your flat files directly. This also means that they should be stored outside the document root or protected from public access by other means.
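A minimal sketch of that pattern, assuming Node.js/Express (the same Cache-Control header can be sent from PHP with header()); the file path and route are illustrative:

// Serve the flat file through a script instead of exposing it by URL.
// The file itself lives outside the document root.
const express = require('express');
const fs = require('fs');

const app = express();

app.get('/feed', (req, res) => {
  // Mark the response as dynamic so the browser does not cache stale data.
  res.set('Cache-Control', 'no-store, no-cache, must-revalidate');
  res.type('text/plain').send(fs.readFileSync('/var/data/feed.txt', 'utf8'));
});

app.listen(3000);

Your AJAX code then requests /feed instead of the file itself, and every request reflects the latest appends.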
I want to fetch particular HTML content from a remote website URL.
The website URL is as follows:
http://www.realtor.com/realestateandhomes-detail/10216-Montwood-Drive_El-Paso_TX_79925_M78337-06548
I want to fetch some specific information from the above website URL.
I have attached an image highlighting the specific area I want: within the highlighted portion there is a title, an image, and descriptions.
How can I fetch the contents using jQuery, JavaScript, or a JSON call?
Is there any other way to get these?
You might be interested in checking out pjscrape (disclaimer: this is my project). It's a command-line tool using PhantomJS to allow scraping using JavaScript and jQuery in a full browser context.
Scrapers can be written in straight Javascript, executed in the context of the site you're scraping, with a very simple, jQuery-friendly syntax.
It can scrape a single page, an array of pages, or you can define a function to look for more URLs to spider on each page.
It supports JSON and CSV output, either to file or to STDOUT.
If the site is static and the structure is uniform, it should be very fast to scrape all the content you need into a structured data format.
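If memory serves, a pjscrape config file looks roughly like the sketch below; treat the exact option names as assumptions and check the pjscrape documentation. It is run with PhantomJS, e.g. phantomjs pjscrape.js my_config.js.

// Rough pjscrape config sketch: the scraper function runs inside the target page,
// so jQuery selectors work directly. URL and selectors are illustrative.
pjs.config({
  format: 'json',   // JSON or CSV output
  writer: 'stdout'  // or 'file' with an outFile option
});

pjs.addSuite({
  url: 'http://www.example.com/listing',
  scraper: function() {
    return {
      title: $('h1').first().text(),
      description: $('.description').first().text()
    };
  }
});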
This will help you out:
http://papermashup.com/use-jquery-and-php-to-scrape-page-content/
When scraping content, it is vital to consider the following:
Is the content static HTML, or will part of its content be rendered by AJAX calls?
In the first case, simple HTTP GET routines like the one used in the link in JNDPNT's comment will be sufficient.
In the second case, you may want to look at automating Selenium via its WebDriver.
In any case it might be better to ask your colleague if he can provide you with an interface to the raw data, e.g. over a webservice.
If I'm getting you right, you want the user's browser to scrape the content of another domain on the fly, right?
That will not be possible without proxying the request through some script on the same domain (or via a JSONP request to a service that returns the HTML to you) due to the same-origin policy.
Sorry to disappoint.
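For completeness, a sketch of the JSONP-style workaround mentioned above, assuming a cooperating service that wraps the HTML in a callback; the service URL, response shape, and callback name are all illustrative:

// A <script> tag is not subject to the same-origin policy, so a cooperating
// service can hand back the remote HTML wrapped in a callback you name.
function receiveHtml(payload) {
  var doc = new DOMParser().parseFromString(payload.html, 'text/html');
  console.log(doc.title); // search the parsed document however you like
}

var s = document.createElement('script');
s.src = 'https://scrape-service.example.com/fetch' +
        '?url=' + encodeURIComponent('http://www.example.com/') +
        '&callback=receiveHtml';
document.body.appendChild(s);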
Use the Yahoo Pipes (http://pipes.yahoo.com/pipes/) service.
This can be used to grab and manipulate the page HTML, extracting the bits you want. Data can then be posted server side using the Web Service module or sent directly to the client's browser using an ordinary JavaScript callback.