Take a screenshot of a webpage with JavaScript?

Is it possible to take a screenshot of a webpage with JavaScript and then submit that back to the server?
I'm not so concerned with browser security issues, etc., as the implementation would be for an HTA. But is it possible?

Google does this in Google+, and a talented developer reverse-engineered it to produce http://html2canvas.hertzen.com/. To make it work in IE you'll need a canvas support library such as http://excanvas.sourceforge.net/.
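A minimal sketch with a current html2canvas build: render the page into a canvas, then POST the PNG data URL back to the server (the /upload endpoint is an assumption, not part of the library):
html2canvas(document.body).then(function (canvas) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload", true); // hypothetical endpoint
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(JSON.stringify({ image: canvas.toDataURL("image/png") }));
});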

I have done this for an HTA by using an ActiveX control. It was pretty easy to build the control in VB6 to take the screenshot. I had to use the keybd_event API call because SendKeys can't do PrintScreen. Here's the code for that:
Declare Sub keybd_event Lib "user32" _
    (ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Long, ByVal dwExtraInfo As Long)

Public Const CaptWindow = 2

Public Sub ScreenGrab()
    keybd_event &H12, 0, 0, 0            ' Alt down (VK_MENU)
    keybd_event &H2C, CaptWindow, 0, 0   ' PrintScreen down (VK_SNAPSHOT); Alt+PrintScreen grabs the active window
    keybd_event &H2C, CaptWindow, &H2, 0 ' PrintScreen up (KEYEVENTF_KEYUP = &H2)
    keybd_event &H12, 0, &H2, 0          ' Alt up
End Sub
That only gets you as far as getting the window to the clipboard.
Another option, if the window you want a screenshot of is an HTA would be to just use an XMLHTTPRequest to send the DOM nodes to the server, then create the screenshots server-side.
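For illustration, a minimal sketch of that idea, assuming a hypothetical /render endpoint that turns the posted markup into an image server-side:
var xhr = new XMLHttpRequest();
xhr.open("POST", "/render", true); // hypothetical endpoint
xhr.setRequestHeader("Content-Type", "text/html");
// Send the current document's markup; the server re-renders it into a screenshot
xhr.send(document.documentElement.outerHTML);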

Another possible solution that I've discovered is http://www.phantomjs.org/ which allows one to very easily take screenshots of pages and a whole lot more. Whilst my original requirements for this question aren't valid any more (different job), I will likely integrate PhantomJS into future projects.
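For reference, a page screenshot in PhantomJS is only a few lines (this runs under the phantomjs binary, not in a browser):
var page = require('webpage').create();
page.open('http://example.com', function (status) {
    if (status === 'success') {
        page.render('example.png'); // write the rendered page to disk
    }
    phantom.exit();
});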

Wondering if this is possible to do by rendering the whole body element into a canvas, then using canvas2image?
http://www.nihilogic.dk/labs/canvas2image/
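For the second half of that idea, modern browsers can skip canvas2image entirely: the standard canvas.toDataURL method exports a canvas as a PNG. A minimal sketch, assuming something has already drawn the page into a canvas:
var canvas = document.getElementById('myCanvas'); // hypothetical canvas holding the rendered page
var img = new Image();
img.src = canvas.toDataURL('image/png'); // export the canvas as a PNG data URL
document.body.appendChild(img);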

A possible way to do this, if you're running on Windows and have .NET installed:
using System.Drawing;
using System.Windows.Forms;

public Bitmap GenerateScreenshot(string url)
{
    // This method gets a screenshot of the webpage
    // rendered at its full size (height and width)
    return GenerateScreenshot(url, -1, -1);
}

public Bitmap GenerateScreenshot(string url, int width, int height)
{
    // Load the webpage into a WebBrowser control
    WebBrowser wb = new WebBrowser();
    wb.ScrollBarsEnabled = false;
    wb.ScriptErrorsSuppressed = true;
    wb.Navigate(url);
    while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }

    // Set the size of the WebBrowser control
    wb.Width = width;
    wb.Height = height;
    if (width == -1)
    {
        // Take screenshot of the web page's full width
        wb.Width = wb.Document.Body.ScrollRectangle.Width;
    }
    if (height == -1)
    {
        // Take screenshot of the web page's full height
        wb.Height = wb.Document.Body.ScrollRectangle.Height;
    }

    // Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control
    Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
    wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));
    wb.Dispose();
    return bitmap;
}
And then via PHP you can do:
exec("CreateScreenShot.exe -url http://.... -save C:/shots domain_page.png");
Then you have the screenshot on the server side.

This might not be the ideal solution for you, but it might still be worth mentioning.
Snapsie is an open-source ActiveX object that enables Internet Explorer screenshots to be captured and saved. Once the DLL file is registered on the client, you should be able to capture the screenshot and upload the file to the server within JavaScript. Drawbacks: it needs the DLL file registered on the client, and it works only with Internet Explorer.

We had a similar requirement for reporting bugs. Since it was for an intranet scenario, we were able to use browser addons (like Fireshot for Firefox and IE Screenshot for Internet Explorer).

This question is old but maybe there's still someone interested in a state-of-the-art answer:
You can use getDisplayMedia:
https://github.com/ondras/browsershot
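A minimal sketch of that approach; getDisplayMedia prompts the user to pick a screen, window or tab, and requires a secure context:
async function captureScreenshot() {
    // Ask the user to share something, then grab a single video frame from the stream
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const video = document.createElement('video');
    video.srcObject = stream;
    await video.play();
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    stream.getTracks().forEach(function (t) { t.stop(); }); // stop sharing right away
    return canvas.toDataURL('image/png'); // PNG data URL, ready to POST to a server
}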

SnapEngage uses a Java applet (1.5+) to make a browser screenshot. AFAIK, java.awt.Robot should do the job - the user just has to permit the applet to do it (once).
And I have just found a post about it:
Stack Overflow question JavaScript code to take a screenshot of a website without using ActiveX
Blog post How SnapABug works – and what they should do

I found that dom-to-image did a good job (much better than html2canvas). See the following question & answer: https://stackoverflow.com/a/32776834/207981
This question asks about submitting this back to the server, which should be possible, but if you're looking to download the image(s) you'll want to combine it with FileSaver.js, and if you want to download a zip with multiple image files all generated client-side take a look at jszip.
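A minimal sketch of that combination, posting the PNG back to a hypothetical /upload-screenshot endpoint (element id and endpoint are assumptions):
domtoimage.toPng(document.getElementById('capture'))
    .then(function (dataUrl) {
        return fetch('/upload-screenshot', { // hypothetical endpoint
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ image: dataUrl })
        });
    })
    .catch(function (error) {
        console.error('Screenshot failed:', error);
    });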

You can achieve that using an HTA and VBScript: just call an external tool to do the screenshotting. I forget what it's called, but Windows Vista ships with a tool for taking screenshots, so you don't even need an extra install.
As for automation - it totally depends on the tool you use. If it has an API, I am sure you can trigger the screenshot and saving process through a couple of Visual Basic calls without the user knowing that you did what you did.
Since you mentioned HTA, I am assuming you are on Windows and (probably) know your environment (e.g. OS and version) very well.

If you are willing to do it on the server side, there are options like PhantomJS, which is now deprecated. The best way to go would be Headless Chrome with something like Puppeteer on Node.js. Capturing a web page using Puppeteer is as simple as follows:
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com');
    await page.screenshot({ path: 'example.png' });
    await browser.close();
})();
However, it requires Headless Chrome to be able to run on your servers, which has some dependencies and might not be suitable in restricted environments. (Also, if you are not using Node.js, you might need to handle installation / launching of browsers yourself.)
If you are willing to use a SaaS service, there are many options such as
Restpack
UrlBox
Screenshot Layer

A great solution for screenshot taking in Javascript is the one by https://grabz.it.
They have a flexible and simple-to-use screenshot API which can be used by any type of JS application.
If you want to try it, first you should get the authorization app key + secret and the free SDK.
Then, in your app, the implementation steps would be:
<!-- include the grabzit.min.js library in the web page where you want the capture to appear -->
<script src="grabzit.min.js"></script>

<!-- use the key and the secret to log in, then capture the URL -->
<script>
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com").Create();
</script>

The screenshot can be customized with different parameters. For example:

<script>
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com",
    {"width": 400, "height": 400, "format": "png", "delay": 10000}).Create();
</script>
That's all.
Then simply wait a short while and the image will automatically appear at the bottom of the page, without you needing to reload the page.
There are other functionalities of the screenshot mechanism which you can explore in their documentation.
It's also possible to save the screenshot locally. For that you will need to utilize the GrabzIt server-side API. For more info check their detailed guide.

As of April 2020, the GitHub library html2canvas:
https://github.com/niklasvh/html2canvas
GitHub 20K stars | Azure Pipelines: succeeded | Downloads 1.3M/mo
Quote: "JavaScript HTML renderer. The script allows you to take 'screenshots' of webpages or parts of it, directly on the users browser. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation as it does not make an actual screenshot, but builds the screenshot based on the information available on the page."

I made a simple function that uses rasterizeHTML to build an SVG and/or an image with the page contents.
Check it out:
https://github.com/orisha/tdg-screen-shooter-pure-js
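For context, a rough sketch of the underlying rasterizeHTML call (drawHTML renders markup into a canvas, which can then be exported):
var canvas = document.createElement('canvas');
canvas.width = document.documentElement.scrollWidth;
canvas.height = document.documentElement.scrollHeight;
// Render the current page's markup into the canvas, then export or display it
rasterizeHTML.drawHTML(document.documentElement.outerHTML, canvas)
    .then(function () {
        document.body.appendChild(canvas); // or canvas.toDataURL('image/png') to upload
    });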

Related

How to get transferred size of a complete page load?

With Selenium or JavaScript how could you get the (over the network) transferred size (bytes) of the loaded page including all the content, images, css, js, etc?
The preferred size is that of what goes over the network, that is compressed, only for the requests that are made, etc.
This is what you usually can see in dev tools, to the right in the network status bar.
If that's not possible, could one just get a total size of all the loaded resources (without compression, etc)? That would be an acceptable alternative.
The browser is Firefox, but if it could be done with some other Selenium compatible browser that would be acceptable also.
I guess this could be done using a proxy, but is there any JS or Selenium way to get such information?
If proxy is the only way, which one would one use (or implement) to keep things simple for such a task? Just implementing something in Java before setting up the driver?
(The solution should work at least on Linux, but preferably on Windows also. I'm using Selenium WebDriver via Java.)
For future reference, it is possible to request this information from the browser by javascript. However, at the time of writing no browser supports this feature for this specific data yet. More information can be found here.
In the meantime, for Chrome you can parse this information from the performance log.
//Enable performance logging
DesiredCapabilities capa = DesiredCapabilities.chrome();
LoggingPreferences logPrefs = new LoggingPreferences();
logPrefs.enable(LogType.PERFORMANCE, Level.ALL);
capa.setCapability(CapabilityType.LOGGING_PREFS, logPrefs);

//Start driver
WebDriver driver = new ChromeDriver(capa);
You can then get the data like this:
for (LogEntry entry : driver.manage().logs().get(LogType.PERFORMANCE)) {
    if (entry.getMessage().contains("Network.dataReceived")) {
        Matcher dataLengthMatcher = Pattern.compile("encodedDataLength\":(.*?),").matcher(entry.getMessage());
        dataLengthMatcher.find();
        // Do whatever you want with the data here.
    }
}
If, like in your case, you want to know the specifics of a single page load, you could use a pre- and postload timestamp and only get entries within that timeframe.
The performance API mentioned in Hakello's answer is now well supported (on everything except IE & Safari), and is simple to use:
return performance
.getEntriesByType("resource")
.map((x) => x.transferSize)
.reduce((a, b) => (a + b), 0);
You can run that script using executeScript to get the number of bytes downloaded since the last navigation event. No setup or configuration is required.
Yes, you can do it using BrowserMob Proxy. This is a Java JAR which uses a Selenium proxy to track network traffic from the client side, such as page load durations and query strings to different services.
You can get it at bmp.lightbody.net. This API creates .har files which contain all this information in JSON format, which you can read using the online tool at http://www.softwareishard.com/har/viewer/.
I have achieved this in Python, which might save people some time. To set up the logging:

import re

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

logging_prefs = {'performance': 'INFO'}
caps = DesiredCapabilities.CHROME.copy()
caps['loggingPrefs'] = logging_prefs
driver = webdriver.Chrome(desired_capabilities=caps)
To calculate the total:

total_bytes = []
for entry in driver.get_log('performance'):
    if "Network.dataReceived" in str(entry):
        r = re.search(r'encodedDataLength\":(.*?),', str(entry))
        total_bytes.append(int(r.group(1)))
mb = round((float(sum(total_bytes) / 1000) / 1000), 2)

Javascript Access to File on local machine

I want to open files located on the local drive using window.open().
When I try to access a file using window.open I get an "Access is denied" error.
Could somebody help me achieve this in Internet Explorer 8.0?
Thanks!
You can't. And thank God for that. Imagine how insecure the internet would've been if JS was able to access a client's file-system.
Of course, IE8 has the MS-specific JScript superset (ActiveXObject), which does enable filesystem access:
var fs = new ActiveXObject("Scripting.FileSystemObject");
// Open for writing (iomode 2 = ForWriting), creating the file if it doesn't exist
var fileHandle = fs.OpenTextFile("C:\\path\\to\\file.tmp", 2, true);
fileHandle.Write('This is written to a file');
fileHandle.Close();
// Re-open for reading (iomode 1 = ForReading)
fileHandle = fs.OpenTextFile("C:\\path\\to\\file.tmp", 1);
console.log(fileHandle.ReadLine()); // will log what we've just written to the file
fileHandle.Close();
But this is non-standard, is (I think) no longer supported, and doesn't work cross-browser.
Here's the documentation. At the bottom there's a link to a more detailed overview of the properties and methods this object has to offer; as you can see, there's a lot to choose from.
I'm adding this answer just to be complete, but as far as web pages go, Elias Van Ootegem's answer is correct: you can't (and shouldn't be able to) get to the local hard drive.
But... you can if your page is an HTA (HTML Application):
HTML Application wiki
This is essentially a web page with .hta as the extension (usually) and some extra tags to tell IE that it's an HTA application, not a web page.
This is something that runs via the Windows operating system and, as far as I'm aware, is only available for IE. The HTA application opens as a web page in IE, but without the usual web navigation / favourites toolbars etc.
Note that if you have a page on an internet server delivered as an HTA application, you're likely to cause virus scanners and firewalls to pop up, because this would essentially be running a script which could do many things to your computer. Not good for general internet stuff at all, but it might be useful in a secure environment like an intranet, where the source of the application is known to be safe.
To get to the file system, you can use JavaScript code like this:

// set up a File System Object variable..
var FSO = new ActiveXObject("Scripting.FileSystemObject");

// VBScript-style constants (these are not predefined in JScript)
var ForReading = 1, TristateUseDefault = -2;

// function to read a file
function ReadFile(sFile) {
    var f, ts;
    var s = "";
    if (FSO.FileExists(sFile)) {
        f = FSO.GetFile(sFile);
        ts = f.OpenAsTextStream(ForReading, TristateUseDefault);
        if (!ts.AtEndOfStream) { s = ts.ReadAll(); }
        ts.Close();
    }
    return s;
}

alert(ReadFile("c:\\somefilename.txt"));

How to start two or more custom URL Protocol from Javascript

I have an old html page that creates a script file and executes it using:
fsoObject = new ActiveXObject("Scripting.FileSystemObject")
wshObject = new ActiveXObject("WScript.Shell")
I am trying to modify it and make it usable also from other browsers. If you know the answer stop reading and please answer. If there is no quick answer, here is the description of my attempts. I was successful in doing the job, but only when the script is shorter than 2000 characters. I need help for scripts longer than 2000 characters.
The webpage is for internal use only, so it is easy for me to create a custom URL protocol on each computer that runs a VBScript file from a network drive.
I created my custom URL Protocol that starts a VBScript file like this:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\MyUrlProtocol]
"URL Protocol"=""
@="Url:MyUrlProtocol"
"UseOriginalUrlEncoding"=dword:00000001

[HKEY_CLASSES_ROOT\MyUrlProtocol\DefaultIcon]
@="C:\\Windows\\System32\\WScript.exe"

[HKEY_CLASSES_ROOT\MyUrlProtocol\shell]
[HKEY_CLASSES_ROOT\MyUrlProtocol\shell\open]
[HKEY_CLASSES_ROOT\MyUrlProtocol\shell\open\command]
@="C:\\Windows\\System32\\WScript.exe \"X:\\MyUrlProtocol.vbs\" \"%1\""
In MyUrlProtocol.vbs I have this:
MsgBox "The length of the link is " & Len(WScript.Arguments(0)) & " characters"
MsgBox "The content of the link is: " & WScript.Arguments(0)
When I click on the "click me" link I see two messages, so everything works well (tested with Chrome and IE on Windows 7).
It works also when I execute document.getElementById("test").click()
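For reference, the test link presumably looks something like this (the original markup was lost in formatting; the payload is illustrative):
<a id="test" href="MyUrlProtocol:some%20test%20payload">click me</a>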
I thought this could be the solution: I would pass the text of the script to the VBS static script, which would create the dynamic script and run it, but with this system I can't pass more than ~2000 characters.
So I tried to split the text of the script in chunks smaller than 2000 characters and simulate several clicks on the link, but only the first one works.
So I tried with xmlhttp.open("GET","MyUrlProtocol:test",false);, but Chrome says Cross origin requests are only supported for HTTP.
Is it possible to pass more than 2000 characters to a VBScript script via a custom URL protocol?
If not, is it possible to call several custom URL protocols in sequence?
If not, is there another way to create a script file and run it from Javascript?
EDIT 1
I found a solution, but in Chrome it only works when it feels like it, so I'm back to square one.
The code below in IE executes the script 4 times (correct), but in Chrome only the first execution runs.
If I change it to delay += 2000, then Chrome usually runs the script 2 times, but sometimes 1 and sometimes 3 or even 4 times.
If I change it to delay += 10000, then it usually runs the script 4 times, but sometimes misses one.
The function is always executed 4 times, both in Chrome and IE. What is weird is that the sr.click() sometimes does nothing and the function execution continues.
<HTML>
<HEAD>
<script>
var delay;

function runScript(text) {
    setTimeout(function() { runScript2(text); }, delay);
    delay += 100;
}

function runScript2(text) {
    var sr = document.getElementById('scriptRunner');
    sr.href = 'intelliclad:' + text;
    sr.click();
}

function test() {
    delay = 0;
    runScript("uno");
    runScript("due");
    runScript("tre");
    runScript("quattro");
}
</script>
</HEAD>
<BODY>
<input type="button" value="Run test" onclick="test()">
<a id="scriptRunner" href="#">scriptRunner</a>
</BODY>
</HTML>
EDIT 2
I tried Luke's suggestion of setting the next timeout from inside the callback, but nothing changed (IE always works, Chrome whenever it likes).
Here is the new code:
var scripts;
var delay = 2000;

function runScript() {
    var sr = document.getElementById('scriptRunner');
    sr.href = 'intelliclad:' + scripts.shift();
    sr.click();
    if (scripts.length)
        setTimeout(function() { runScript(); }, delay);
}

function test() {
    scripts = ["uno", "due", "tre", "quattro"];
    runScript();
}
Some background: The page asks for the shape of a panel, which can be just a few parameters [nfaces=1, shape1='square', width1=100] or hundreds of parameters for panels with many faces, many slots, many fasteners, etc. After asking for all the parameters a script for our internal 3D CAD (which can be larger than 20KB) is generated and the CAD is started and asked to execute the script.
I would like to do all on the client side, because the page is served by a Domino web server, which can't even dream of managing such a complex script.
I didn't read your whole post...have an answer:
I too wish that custom URL protocols could handle long URLs. They simply do not. IE is even worse, as some OSs only accept 800 characters.
So, here's the solution:
For long URLs, only pass a single-use token. The VBScript uses the token and does a URL GET to your web server to fetch all of the data.
This is the only way I've been able to successfully pass lots of data around. If you ever find a clearer solution, please remember to post it here.
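A client-side sketch of that token idea; the /api/issue-token endpoint and variable names are assumptions, not part of the original answer:
var xhr = new XMLHttpRequest();
xhr.open("POST", "/api/issue-token", true); // hypothetical endpoint that stores the payload
xhr.setRequestHeader("Content-Type", "text/plain");
xhr.onload = function () {
    var token = xhr.responseText; // short single-use token issued by the server
    // The URL now stays far below the ~2000-character limit:
    location.href = "myproto://run?token=" + encodeURIComponent(token);
};
xhr.send(longScriptText); // the >2000-character script travels over HTTP, not in the URL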
Update:
Note that this is the best way I have found to deal with the url protocol limitations. I too wish this was not necessary. This does work and works well.
You mentioned Dominos, so possibly you need something in a POS environment... I create a web based POS system, so we could face a lot of the same issues.
Suppose you want a custom URL to print a PDF to the default printer without the annoying popup window. We need to do this thousands of times a day...

1. When building the web page, add a print button which, when pressed, calls the custom URL: myproto://printpdf?id=12345&token=onetimetoken
2. This will execute your VBScript on the local desktop.
3. In your VBScript, parse the arguments and react. In this case, your command is printpdf, the id is 12345 and you have a one-time token key.
4. Have the VBScript do an HTTPS GET to: https://mydomain.com/APIs/printpdf.whatever?id=12345&key=onetimetoken
5. Check the credentials based on the IP address and token; if all aligns, return the contents of the PDF (you may want to convert the PDF to a byte-array string).
6. Now the VBScript has the PDF; assemble it and write it to a temp folder, then execute a silent PDF print command (I use Sumatra PDF http://blog.kowalczyk.info/software/sumatrapdf/free-pdf-reader.html).
7. Mission accomplished.
Since I don't know what you want to do in your custom URL and the general workflow, I can only describe how I've solved the short-URL issue.
Using this technique, the possibilities are limitless. You have full control over the local computer running the web browser, you have a onetime use token which grants access to a web API with can return any sort of information you program.
You could write a custom url protocol to turn on the pizza oven if you wanted :)
If you are not able to create the server side code which is listening for vbscript's get request then this would not work.
You might be able to pass the data from the browser to the vbscript using the clipboard.
Update 2:
Since in this case the data is on the client (one single form can define hundreds of parameters), the server API doesn't know what to answer to the VBScript request. So the workflow described above must be preceded by these two steps:
The onkeypress event executes a submit to send the current parameters to the server
The server replies with the refreshed form, adding to the body onload a call to a function which uses another submit to call the custom url, as described on point 1 listed above.
Update 3:
stenci, what you've added (in Update 2) will work. I would do it like this:

1. The user presses a button saying "I'm done editing the form"
2. Ajax-post the form to the server
3. The server saves the data and attaches a unique key to the datastore
4. The server returns the key to the ajax callback function
5. Now the client has a single-use key and invokes the URL schema, passing the key
6. The VBScript does an HTTPS GET to the server and passes the key
7. The server returns the data to the VBScript

It is a bit long-winded, but once coded it will work like a charm.
The only other alternative I can see is to copy the form data to the clipboard using something like: http://zeroclipboard.org/
and then in vbscript see if you can read the clipboard like: Use clipboard from VBScript
How about creating an iFrame for each instance?
Something like this:
function runScript(text) {
    var iframe = document.createElement('iframe');
    iframe.src = 'intelliclad:' + text;
    document.body.appendChild(iframe);
}

function test() {
    runScript("uno");
    runScript("due");
    runScript("tre");
    runScript("quattro");
}
You can then use css styling to make these iframes transparent / hidden.
You might not like this answer, but I've used this method in the past and it works.
Instead of relying on ActiveX, consider using a Java Applet, and JNI.
Basically, you have to make sure the native scripts you want to run are available on your client machine, along with a JNI wrapper.
The applet will have to be at least self signed, for the browser to allow it to load and access a native library. Once the JNI libraries are loaded, you can easily call methods from the page / applet.
As a consequence of using Java, you could possibly use the same applet for windows as well as linux clients, provided of course you have native libraries present on the respective clients.
This series of articles talks about precisely your problem : http://www.javaworld.com/article/2076775/java-security/escape-the-sandbox--access-native-methods-from-an-applet.html
P.S. the article is really old, but the concept remains unchanged.

Programmatically open new browser tab with Silverlight and set content

Was wondering if there is any way in Silverlight to open a new browser tab and set its content. In short, my app receives files (binary data) and needs to have the user's browser present them.
My app downloads contents (images/PDFs/whatever) from repositories in the cloud and stores them as binary data in a local cache; after that I need a way to display those now-local contents to the end user in a new tab. The "new tab" requirement is due to Silverlight not supporting rendering of many file types such as .gif, .pdf and others - things that browsers handle easily, either natively or with widely used plugins. So my current WTF-y solution uses System.Windows.Browser and consists of the following:
// Get document and body
var doc = HtmlPage.Document;
var body = doc.Body;
// Create a <form> element and add it to the body
var newForm = doc.CreateElement("form");
newForm.SetAttribute("action", "www.example.com/contentpresenter.php");
newForm.SetAttribute("enctype", "multipart/form-data");
newForm.SetAttribute("method", "POST");
newForm.SetAttribute("target", "_blank");
body.AppendChild(newForm);
var inp = doc.CreateElement("input");
inp.SetAttribute("type", "text");
inp.SetAttribute("name", "mcontent");
inp.SetAttribute("value", Tools.ToBase64( content.Content as Stream ));
newForm.AppendChild(inp);
var inpt = doc.CreateElement("input");
inpt.SetAttribute("type", "text");
inpt.SetAttribute("name", "tcontent");
inpt.SetAttribute("value", content.ContentType);
newForm.AppendChild(inpt);
// Send away!
newForm.Invoke("submit");
In short, it builds an HTML form that posts the content to a remote PHP script, which in turn does nothing more than decode and present the content, which will open in a new tab. Yes, I'm fully aware of how idiotic it sounds - but it does the trick and works as intended.
As far as I know, creating a new HtmlWindow and building up/altering its contents is not an option due to security constraints. An obvious option is having Silverlight produce javascript which would in turn create a new tab that loads the provided content, but javascript is not too big in handling binary or base64 data - at least not cross-browser seamlessly - and the whole thing seems stupid anyways.
Is there a solution to achieve this solely through Silverlight, or at least with a minimum amount of javascript involved? Alternatively, is there any javascript library you would recommend to handle base64 data?
Best regards!
I recommend you find the sources of Telerik's Silverlight components and use RadHtmlPlaceholder (slightly buggy).
Also, you can enable trusted applications to run inside the browser for SL 5 and use the WebBrowser control (best quality), but for Windows only.

PDF files do not open in Internet Explorer with Adobe Reader 10.0 - users get an empty gray screen. How can I fix this for my users?

There is a known issue with opening a PDF in Internet Explorer (v 6, 7, 8, 9) with Adobe Reader X (version 10.0.*). The browser window loads with an empty gray screen (and doesn't even have a Reader toolbar). It works perfectly fine with Firefox, Chrome, or with Adobe Reader 10.1.*.
I have discovered several workarounds. For example, hitting "Refresh" will load the document properly. Upgrading to Adobe Reader 10.1.*, or downgrading to 9.*, fixes the issue too.
However, all of these solutions require the user to figure it out. Most of my users get very confused at seeing this gray screen, and end up blaming the PDF file and blaming the website for being broken. Honestly, until I researched the issue, I blamed the PDF too!
So, I am trying to figure out a way to fix this issue for my users.
I've considered providing a "Download PDF" link (that sets the Content-Disposition header to attachment instead of inline), but my company does not like that solution at all, because we really want these PDF files to display in the browser.
Has anyone else experienced this issue?
What are some possible solutions or workarounds?
I'm really hoping for a solution that is seamless to the end-user, because I can't rely on them to know how to change their Adobe Reader settings, or to automatically install updates.
Here's the dreaded Gray Screen:
Edit: screenshot was deleted from file server! Sorry!
The image was a browser window, with the regular toolbar, but a solid gray background, no UI whatsoever.
Background info:
Although I don't think the following information is related to my issue, I'll include it for reference:
This is an ASP.NET MVC application, and has jQuery available.
The link to the PDF file has target=_blank so that it opens in a new window.
The PDF file is being generated on-the-fly, and all the content headers are being set appropriately.
The URL does NOT include the .pdf extension, but we do set the content-disposition header with a valid .pdf filename and the inline setting.
Edit: Here is the source code that I'm using to serve up the PDF files.
First, the Controller Action:
public ActionResult ComplianceCertificate(int id)
{
    byte[] pdfBytes = ComplianceBusiness.GetCertificate(id);
    return new PdfResult(pdfBytes, false, "Compliance Certificate {0}.pdf", id);
}
And here is the ActionResult (PdfResult, inherits System.Web.Mvc.FileContentResult):
using System.Net.Mime;
using System.Web.Mvc;

/// <summary>
/// Returns the proper response headers and "Content-Disposition" for a PDF file,
/// and allows you to specify the filename and whether it will be downloaded by the browser.
/// </summary>
public class PdfResult : FileContentResult
{
    public ContentDisposition ContentDisposition { get; private set; }

    /// <summary>
    /// Returns a PDF FileResult.
    /// </summary>
    /// <param name="pdfFileContents">The data for the PDF file</param>
    /// <param name="download">Determines if the file should be shown in the browser or downloaded as a file</param>
    /// <param name="filename">The filename that will be shown if the file is downloaded or saved.</param>
    /// <param name="filenameArgs">A list of arguments to be formatted into the filename.</param>
    [JetBrains.Annotations.StringFormatMethod("filename")]
    public PdfResult(byte[] pdfFileContents, bool download, string filename, params object[] filenameArgs)
        : base(pdfFileContents, "application/pdf")
    {
        // Format the filename:
        if (filenameArgs != null && filenameArgs.Length > 0)
        {
            filename = string.Format(filename, filenameArgs);
        }

        // Build the Content-Disposition with the filename:
        ContentDisposition = new ContentDisposition
        {
            Inline = !download,
            FileName = filename,
            Size = pdfFileContents.Length,
        };
    }

    protected override void WriteFile(System.Web.HttpResponseBase response)
    {
        // Add the Content-Disposition to the response headers
        response.AddHeader("Content-Disposition", ContentDisposition.ToString());
        base.WriteFile(response);
    }
}
It's been 4 months since asking this question, and I still haven't found a good solution.
However, I did find a decent workaround, which I will share in case others have the same issue.
I will try to update this answer, too, if I make further progress.
First of all, my research has shown that there are several possible combinations of user-settings and site settings that cause a variety of PDF display issues. These include:
Broken version of Adobe Reader (10.0.*)
HTTPS site with Internet Explorer and the default setting "Don't save encrypted files to disk"
Adobe Reader setting - disable "Display PDF files in my browser"
Slow hardware (thanks @ahochhaus)
I spent some time researching PDF display options at pdfobject.com, which is an EXCELLENT resource and I learned a lot.
The workaround I came up with is to embed the PDF file inside an empty HTML page. It is very simple: See some similar examples at pdfobject.com.
<html>
<head>...</head>
<body>
<object data="/pdf/sample.pdf" type="application/pdf" height="100%" width="100%"></object>
</body>
</html>
However, here's a list of caveats:
This ignores all user-preferences for PDFs - for example, I personally like PDFs to open in a stand-alone Adobe Reader, but that is ignored
This doesn't work if you don't have the Adobe Reader plugin installed/enabled, so I added a "Get Adobe Reader" section to the html, and a link to download the file, which usually gets completely hidden by the <object /> tag, ... but ...
In Internet Explorer, if the plugin fails to load, the empty object will still hide the "Get Adobe Reader" section, so I had to set the z-index to show it ... but ...
Google Chrome's built-in PDF viewer also displays the "Get Adobe Reader" section on top of the PDF, so I had to do browser detection to determine whether to show the "Get Reader".
This is a huge list of caveats. I believe it covers all the bases, but I am definitely not comfortable applying this to EVERY user (most of whom do not have an issue).
Therefore, we decided to ONLY do this embedded option if the user opts-in for it. On our PDF page, we have a section that says "Having trouble viewing PDFs?", which lets you change your setting to "embedded", and we store that setting in a cookie.
In our GetPDF Action, we look for the embed=true cookie. This determines whether we return the PDF file, or if we return a View of HTML with the embedded PDF.
Ugh. This was even less fun than writing IE6-compatible JavaScript.
I hope that others with the same problem can find comfort knowing that they're not alone!
I don't have an exact solution, but I'll post my experiences with this in case they help anyone else.
From my testing, the gray screen is only triggered on slower machines [1]. To date, I have not been able to recreate it on newer hardware [2]. All of my tests have been in IE8 with Adobe Reader 10.1.2. For my tests I turned off SSL and removed all headers that could have disabled caching.
To recreate the gray screen, I followed the following steps:
1) Navigate to a page that links to a PDF
2) Open the PDF in a new window or tab (either via the context menu or target="_blank")
3) In my tests, this PDF will open without error (however I have received user reports indicating failure on the first PDF load)
4) Close the newly opened window or tab
5) Open the PDF (again) in a new window or tab
6) This PDF will not open, but instead only show the "gray screen" mentioned by the first user (all subsequent PDFs that are loaded will also not display -- until all browser windows are closed)
I performed the above test with several different PDF files (both static and dynamic) generated from different sources and the gray screen issue always occurs when following the above steps (on the "slow" computer).
To mitigate the problem in my application, I "tore down" the page that links to the PDF (removed parts piece by piece until the gray screen no longer occurred). In my particular application (built on closure-library) removing all references to goog.userAgent.adobeReader [3] appears to have fixed the issue. This exact solution won't work with jquery or .net MVC but maybe the process can help you isolate the source of the issue. I have not yet taken the time to isolate which particular portion of goog.userAgent.adobeReader triggers the bug in Adobe Reader, but it is likely that jquery might have similar plugin detection code to that used in closure-library.
[1] Machine experiencing gray screen:
Win Server '03 SP3
AMD Sempron 2400+ at 1.6GHz
256MB memory
[2] Machine not experiencing gray screen:
Win XP x64 SP2
AMD Athlon II X4 620 at 2.6 GHz
4GB memory
[3] http://closure-library.googlecode.com/svn/docs/closure_goog_useragent_adobereader.js.source.html
I ran into this issue around the time MVC1 was first released. See Generating PDF, error with IE and HTTPS regarding the Cache-Control header.
For Win7 Acrobat Pro X:
Since I did all of these without rechecking to see if the problem still existed afterwards, I am not sure which one of them actually fixed the problem, but one of them did. In fact, after doing #3 and rebooting, it worked perfectly.
FYI: Below is the order in which I stepped through the repair.

1. Go to Control Panel > Folder Options; under each of the General, View and Search tabs, click the Restore Defaults button and the Reset Folders button.
2. Go to Internet Explorer, Tools > Options > Advanced > Reset (I did not need to delete personal settings).
3. Open Acrobat Pro X; under Edit > Preferences > General, at the bottom of the page select Default PDF Handler. I chose Adobe Pro X, and clicked Apply.
4. You may be asked to reboot (I did).

Best wishes
In my case the solution was quite simple.
I added this header and the browsers opened the file in every test.
header('Content-Disposition: attachment; filename="filename.pdf"');
I had this problem. Reinstalling the latest version of Adobe Reader did nothing. Adobe Reader worked in Chrome but not in IE. This worked for me ...
1) Go to IE's Tools-->Compatibility View menu.
2) Enter a website that has the PDF you wish to see. Click OK.
3) Restart IE
4) Go to the website you entered and select the PDF. It should come up.
5) Go back to Compatibility View and delete the entry you made.
6) Adobe Reader works OK now in IE on all websites.
It's a strange fix, but it worked for me. I needed to go through an Adobe acceptance screen after reinstall that only appeared after I did the Compatibility View trick. Once accepted, it seemed to work everywhere. Pretty flaky stuff. Hope this helps someone.
Hm, would it be possible to simply do this:
The first time your user opens a PDF, use JavaScript to show a popup that basically says "If you cannot see your document, please click HERE". Make "HERE" a big button that explains the problem to your user. Also make another button, "everything's fine". If the user clicks that one, remember it, so the popup isn't displayed in the future.
I'm trying to be practical. Going to great lengths trying to solve this kind of problem "properly" for a small subset of Adobe Reader versions doesn't sound very productive to me.
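A rough sketch of that suggestion; the cookie name and help page are illustrative, not from the original answer:
// Show the one-time prompt unless the user has already answered it
if (!document.cookie.match(/pdfHelpShown=1/)) {
    var ok = confirm("If you cannot see your document, press Cancel for help. " +
                     "If everything is fine, press OK.");
    if (!ok) { location.href = "/pdf-troubleshooting"; } // hypothetical help page
    document.cookie = "pdfHelpShown=1; path=/"; // remember the answer
}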
Experimenting more, the underlying cause in my app (calling goog.userAgent.adobeReader) was accessing Adobe Reader via an ActiveXObject on the page with the link to the PDF. This minimal test case causes the gray screen for me (however removing the ActiveXObject causes no gray screen).
<!DOCTYPE html>
<html lang="en">
<head>
<title>hi</title>
<meta charset="utf-8">
</head>
<body>
<script>
new ActiveXObject('AcroPDF.PDF.1');
</script>
<a target="_blank" href="http://partners.adobe.com/public/developer/en/xml/AdobeXMLFormsSamples.pdf">link</a>
</body>
</html>
I'm very interested if others are able to reproduce the problem with this test case and following the steps from my other post ("I don't have an exact solution...") on a "slow" computer.
Sorry for posting a new answer, but I couldn't figure out how to add a code block in a comment on my previous post.
For a video example of this minimal test case, see: http://youtu.be/IgEcxzM6Kck
I realize this is a rather late post but still a possible solution for the OP. I use IE9 on Win 7 and have been having Adobe Reader's grey screen issues for several months when trying to open pdf bank and credit card statements online. I could open everything in Firefox or Opera but not IE. I finally tried PDF-Viewer, set it as the default pdf viewer in its preferences and no more problems. I'm sure there are other free viewers out there, like Foxit, PDF-Xchange, etc., that will give better results than Reader with less headaches. Adobe is like some of the other big companies that develop software on a take it or leave it basis ... so I left it.
We were getting this issue even after updating to the latest Adobe Reader version.
Two different methods solved it for us:

Using the free version of the Foxit Reader application in place of Adobe Reader
Since most of our clients use Adobe Reader, instead of requiring users to switch to Foxit Reader, we started using window.open(url) to open the PDF instead of window.location.href = url. Adobe was losing the file handle for some reason in different iframes when the PDF was opened using the window.location.href method.
