How can I do this:
the page loads
javascript loads a remote PDF file into local memory
the user clicks a button/link
the system launches the PDF reader or starts a download dialog with the PDF file already in memory
In other words, it's a regular file download in the browser, EXCEPT that the file has already been loaded in the background in order to speed up its receipt when/if the user decides to download it.
You would have to encode the file (perhaps via a servlet), then fetch it through an XHR and write it into a data URI, which you could then attach to a button or link.
This technique would probably only work on small files and very recent browsers.
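Roughly, the idea looks like this (a sketch only; the URL and element id are placeholders, and the whole file ends up base64-encoded in memory, which is why it only suits small files):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/files/report.pdf', true);   // placeholder URL
xhr.responseType = 'arraybuffer';

xhr.onload = function () {
  // Base64-encode the raw bytes so they can live in a data URI.
  var bytes = new Uint8Array(xhr.response);
  var binary = '';
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  var dataUri = 'data:application/pdf;base64,' + btoa(binary);

  // Attach the preloaded file to the link; clicking it now needs
  // no further round trip to the server.
  var link = document.getElementById('pdf-link');   // placeholder element
  link.href = dataUri;
  link.download = 'report.pdf';
};

xhr.send();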
StackOverflow won't let me post an example link as a link, so to test the concept, you'll have to copy the following line into an html file and see if you can load the link:
pdf link
This worked perfectly in Chrome when I tested it just now, and worked partially in Firefox. It didn't work at all in my version of IE.
Another potential solution is to make absolutely sure that the pdf is being cached, and then try to load it in a hidden iframe. Whether this works or not will depend on how the user has their browser set up.
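That approach would amount to something like this (again just a sketch with a placeholder URL; the server still has to send cache-friendly headers for it to help):

// Warm the cache by loading the PDF into a hidden iframe.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = '/files/report.pdf';   // same URL the visible download link points at
document.body.appendChild(frame);

If the browser keeps the file in its cache, the later click on the real link is served locally instead of going back to the server.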
You should consider not doing it at all, given the difficulties.
Related
I have some generic javascript that loads and displays photos on an html page based on an xml file that it reads. The problem is that if I change the content of the xml file, it doesn't show up on the web page. Here's a section of an xml file to give you an idea of what I'm talking about:
<slides>
<slide>
<name>slideshow/cal2018/20180506.jpeg</name>
<alt>some alt text</alt>
...
What triggered my issue is that I decided to move the "slideshow/" prefix into the xml file instead of keeping it in the javascript. When I did that, the photos stopped showing up - I just got the alt text. However, when I switched to a different page, the photos came up properly.
The underlying problem is that all the web browsers seem to hold on to xml files for a long time - apparently at least a week, which is how long I worked on the pages without the current version of the xml file ever being loaded.
I can manually clear the browser cache and the problem resolves itself for that browser. Simply reloading the page, which is sufficient to reload the javascript, doesn't do that.
I use a lot of xml files to drive web pages. I need to ensure that the xml file is the current one and not some ancient cached copy. Is there a cross-browser (javascript?) way I can clear the (xml) cache before loading an xml file?
Also, why do browsers ignore xml files when I ask for a page reload?
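A common workaround for this is cache busting: appending a query string that changes on every request, so the browser never matches its cached copy. A rough sketch, assuming the XML is fetched with XMLHttpRequest (the file name is a placeholder):

function loadSlidesXml(url, onLoaded) {
  var xhr = new XMLHttpRequest();
  // The changing query string defeats the cache without touching the server.
  xhr.open('GET', url + '?nocache=' + Date.now(), true);
  xhr.overrideMimeType('text/xml');   // make sure responseXML gets populated
  xhr.onload = function () {
    if (xhr.status === 200) {
      onLoaded(xhr.responseXML);      // parsed XML document
    }
  };
  xhr.send();
}

// Usage (placeholder file name):
loadSlidesXml('slides.xml', function (doc) {
  console.log(doc.getElementsByTagName('name').length + ' slides loaded');
});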
I have a URL which, when clicked, downloads a file. Some of these files are quite big, and it takes some time for the browser to receive them. This can be frustrating for users, especially inexperienced ones who don't realize that a file is about to be downloaded.
I tried adding the "download" attribute to the element, but it doesn't work in my versions of Chrome and Firefox. As an alternative I added:
target = "_blank"
which opens a new blank page and closes it once the download starts. But this solution is not very elegant.
Is there a way to track when the browser starts receiving the file, so I can show an indicator?
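One approach that can at least surface a progress indicator (a sketch only; the URL and element id are placeholders) is to fetch the file with an XMLHttpRequest, which fires progress events while it downloads, and then hand the finished bytes back to the browser as a blob URL:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/files/big-report.pdf', true);   // placeholder URL
xhr.responseType = 'blob';

// Fires repeatedly while the file is being received
// (e.total requires the server to send Content-Length).
xhr.onprogress = function (e) {
  if (e.lengthComputable) {
    var percent = Math.round((e.loaded / e.total) * 100);
    document.getElementById('indicator').textContent = percent + '%';
  }
};

xhr.onload = function () {
  // Hand the downloaded bytes to the browser as a normal save-as.
  var url = URL.createObjectURL(xhr.response);
  var a = document.createElement('a');
  a.href = url;
  a.download = 'big-report.pdf';
  document.body.appendChild(a);
  a.click();
  setTimeout(function () { URL.revokeObjectURL(url); }, 10000);
};

xhr.send();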
Good day,
I have a system that renders a large amount of data to PDF (30 MB+). I want the user to view the PDF first, so he can either download it or print it right away. At the moment I am forcing the user to download the file, since open( 'datauri here' ) won't work with larger files. The problem with downloading is that the files multiply and consume space over time, and it isn't necessary to download a file the user just wants to print right away.
I need functionality similar to Chrome's preview when using window.print.
Can you please suggest any ideas or other ways to do this?
I am currently using a javascript library (pdfmake) to create the PDF. I am also using Chrome as my main browser.
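One possibility, assuming a pdfmake build that exposes getBlob: skip the data URI entirely and give the viewer a blob URL, which doesn't run into the data URI size limits:

// docDefinition is whatever is already being passed to pdfmake.
pdfMake.createPdf(docDefinition).getBlob(function (blob) {
  var url = URL.createObjectURL(blob);

  // Open the PDF in a new tab so it can be previewed and printed
  // without saving it to disk. Call this from a click handler,
  // otherwise the popup blocker may intervene.
  window.open(url);
});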
You would have to make sure that the PDF is optimized for fast web view, and that your server is using the byteserving protocol for serving the file.
If that is the case, a useful PDF viewer (such as the web browser component provided by Acrobat/Reader) understands this protocol and requests (after the first page plus overhead of the PDF) only the data for the pages which are to be displayed.
A quick search did not, however, reveal whether the Chrome PDF viewing component is smart enough to understand the byteserving protocol.
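One way to check the server side of this (a sketch; the URL is a placeholder) is to send a request with a Range header and see whether the server answers with 206 Partial Content, which is what byteserving amounts to at the HTTP level:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/files/report.pdf', true);      // placeholder URL
xhr.setRequestHeader('Range', 'bytes=0-1023');   // ask for the first 1 KB only

xhr.onload = function () {
  if (xhr.status === 206) {
    console.log('Server supports byte ranges: ' +
                xhr.getResponseHeader('Content-Range'));
  } else {
    console.log('Server ignored the Range header (status ' + xhr.status + ')');
  }
};

xhr.send();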
I am automating a website with WebDriver but my file download needs are a little different from those I have found when googling.
I have a website which creates orders. When I click the 'place order' button, it redirects me to the Print Order page. As a "convenience" for me, this page automatically launches the download operation in its body onload, meaning that when I click the 'place order' button, I go to a new page and then a file download dialog immediately appears to let me download the generated pdf, thus blocking the browser.
Here are the solutions I found/thought of, and why I couldn't use them:
Configure Firefox/Chrome profiles to silently download files. Can't use this because I have the requirement of timing how long the download takes.
Override window.open with a function that prevents the download, and allows me to grab the URL and download it with wget. Can't use this because the file download is started from the onload function of the next page, so any javascript I run on this page will be lost.
Cancel the onload function or try to execute code before the onload function. Can't find a way to do this in webdriver.
Download the print page with wget, modify the html to change the onload handler, and inject the modified html back into selenium. Can't find a way to replace an entire page, including the <head> and <body> tags and the URL.
Unfortunately I can't change the source code of this website because I am in QA and I don't have that sort of leverage with development. Does anyone have any ideas for a way to download this file in an automatic manner that can be timed?
Thanks.
Seems to me you have 2 options.
Use an AutoIt script to download the file once you receive the popup; this tests both that the page opens and that the download works.
Download the file using a plain HTTP GET (a rough sketch follows below). This skips testing that the page loads, but will still check that the file exists.
The other tests you mention above seem to be too complicated.
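For option 2, the timing requirement can be covered by fetching the PDF URL directly and measuring it; a rough sketch that could be injected into the page with WebDriver's executeScript/executeAsyncScript (the URL is a placeholder, and same-origin cookies from the logged-in session are sent automatically):

var start = Date.now();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/orders/12345/print.pdf', true);   // placeholder URL
xhr.responseType = 'blob';

xhr.onload = function () {
  var elapsedMs = Date.now() - start;
  console.log('Downloaded ' + xhr.response.size + ' bytes in ' + elapsedMs + ' ms');
};

xhr.send();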
We have a file-handling ASP.NET web control used in intranet web applications that currently uses ActiveX to handle file check-outs and check-ins. It works fine in IE on Windows.
But now we are trying to get rid of the ActiveX and IE-only behavior...
When a file is checked out, it is copied to a file share, with access rights limited to the user checking it out.
Using a hidden iframe, and setting the src of the iframe to something like "file:////file_share/dictionary/users_stuff/someDoc.doc", an open/download dialog is shown, so the user can open and edit the shared file in Word, Excel, etc directly from the file share.
Works fine for file types browsers can't handle themselves.
But for file types like txt, images, and html, the browser simply loads the file into the iframe (or opens it, if the user is given a link), and the user can't edit the file without manually launching the appropriate application and pasting in the URL. Showing the users a "Copy this URL into your preferred application and try to edit it" message would not be very user friendly...
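For reference, the iframe trick described above boils down to something like this (the share path is the one from the example above; keep in mind that non-IE browsers may refuse to load file:// URLs from an http page, which is part of the problem):

// Hidden iframe pointing at the checked-out copy on the file share.
var frame = document.createElement('iframe');
frame.style.display = 'none';
document.body.appendChild(frame);
frame.src = 'file:////file_share/dictionary/users_stuff/someDoc.doc';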
My question is: is it possible to get the browser (without ActiveX, IE...) to pass the link to the OS, or show a "What do you want to do with this file" dialog of some sort?
If not, what could be achieved instead, and how?
The closest I could come up with is this thread:
https://stackoverflow.com/a/1394725/1195927
@Red says no (and the original poster agrees), BUT @Daan seems to have a solution.
I have not tested, so YMMV.
If a javascript/html solution is not found, I may have an ugly hack for you . . .
See my post here: Launching a Downloadable Link